My All 2020 Mathematics A to Z: Michael Atiyah

To start this year’s great glossary project Mr Wu, author of the blog, had a great suggestion: The Atiyah-Singer Index Theorem. It’s an important and spectacular piece of work. I’ll explain why I’m not doing that in a few sentences.

Mr Wu pointed out that a biography of Michael Atiyah, one of the authors of this theorem, might be worth doing. GoldenOj endorsed the biography idea, and the more I thought it over the more I liked it. I’m not able to do a true biography, something that goes to primary sources and finds a convincing story of a life. But I can sketch out a bit, exploring his work and why it’s of note.

Color cartoon illustration of a coati in a beret and neckerchief, holding up a director's megaphone and looking over the Hollywood hills. The megaphone has the symbols + x (division obelus) and = on it. The Hollywood sign is, instead, the letters MATHEMATICS. In the background are spotlights, with several of them crossing so as to make the letters A and Z; one leg of the spotlights has 'TO' in it, so the art reads out, subtly, 'Mathematics A to Z'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Michael Atiyah.

Theodore Frankel’s The Geometry of Physics: An Introduction is a wonderful book. It’s 686 pages, including the index. It all explores how our modern understanding of physics is our modern understanding of geometry. On page 465 it offers this:

The Atiyah-Singer index theorem must be considered a high point of geometrical analysis of the twentieth century, but is far too complicated to be considered in this book.

I know when I’m licked. Let me attempt to look at one of the people behind this theorem instead.

The Riemann Hypothesis is about where to find the roots of a particular infinite series. It’s been out there, waiting for a solution, for a century and a half. There are many interesting results which we would know to be true if the Riemann Hypothesis is true. In 2018, Michael Atiyah declared that he had a proof. And, more, an amazing proof, a short proof. Albeit one that depended on a great deal of background work and careful definitions. The mathematical community was skeptical. It still is. But it did not dismiss outright the idea that he had a solution. It was plausible that Atiyah might solve one of the greatest problems of mathematics in something that fits on a few PowerPoint slides.

So think of a person who commands such respect.

His proof of the Riemann Hypothesis, as best I understand, is not generally accepted. For example, it includes the fine structure constant. This comes from physics. It describes how strongly electrons and photons interact. The most compelling (to us) consequence of the Riemann Hypothesis is in how prime numbers are distributed among the integers. It’s hard to think how photons and prime numbers could relate. But, then, if humans had done all of mathematics without noticing geometry, we would know there is something interesting about π. Differential equations, if nothing else, would turn up this number. We happened to discover π in the real world first too. If it were not familiar for so long, would we think there should be any commonality between differential equations and circles?

I do not mean to say Atiyah is right and his critics wrong. I’m no judge of the matter at all. What is interesting is that one could imagine a link between a pure number-theory matter like the Riemann hypothesis and a physical matter like the fine structure constant. It’s not surprising that mathematicians should be interested in physics, or vice-versa. Atiyah’s work was particularly important. Much of his work, from the late 70s through the 80s, was in gauge theory. This subject lies under much of modern quantum mechanics. It’s born of the recognition of symmetries, group operations that you can do on a field, such as the electromagnetic field.

In a sequence of papers Atiyah, with other authors, sorted out particular cases of how magnetic monopoles and instantons behave. Magnetic monopoles may sound familiar, even though no one has ever seen one. These are magnetic points, an isolated north or a south pole without its opposite partner. We can understand well how they would act without worrying about whether they exist. Instantons are more esoteric; I don’t remember encountering the term before starting my reading for this essay. I believe I had encountered the technique, though, as a way to describe the transitions between one quantum state and another. Perhaps the name failed to stick. I can see where there are few examples you could give an undergraduate physics major. And it turns out that monopoles appear as solutions to some problems involving instantons.

This was, for Atiyah, later work. It arose, in part, from bringing the tools of index theory to nonlinear partial differential equations. This index theory is the thing that got us the Atiyah-Singer Index Theorem too complicated to explain in 686 pages. Index theory, here, studies questions like “what can we know about a differential equation without solving it?” Solving a differential equation would tell us almost everything we’d like to know, yes. But it’s also quite hard. Index theory can tell us useful things like: is there a solution? Is there more than one? How many? And it does this through topological invariants. A topological invariant is a trait like, for example, the number of holes that go through a solid object. These things are indifferent to operations like moving the object, or rotating it, or reflecting it. In the language of group theory, they are invariant under a symmetry.

It’s startling to think a question like “is there a solution to this differential equation” has connections to what we know about shapes. This shows some of the power of recasting problems as geometry questions. From the late 50s through the mid-70s, Atiyah was a key person working in a topic that is about shapes. We know it as K-theory. The “K” from the German Klasse, here. It’s about groups, in the abstract-algebra sense; the things in the groups are themselves classes of isomorphisms. Michael Atiyah and Friedrich Hirzebruch defined this sort of group for a topological space in 1959. And this gave definition to topological K-theory. This is again abstract stuff. Frankel’s book doesn’t even mention it. K-theory explores what we can know about shapes from the tangents to those shapes.

And it leads into cobordism, also called bordism. This is about what you can know about shapes which could be represented as cross-sections of a higher-dimension shape. The iconic, and delightfully named, shape here is the pair of pants. In three dimensions this shape is a simple cartoon of what it’s named. On the one end, it’s a circle. On the other end, it’s two circles. In between, it’s a continuous surface. Imagine the cross-sections, how on separate layers the two circles are closer together. How their shapes distort from a real circle. In one cross-section they come together. They appear as two circles joined at a point. In another, they’re a two-looped figure. In another, a smoother circle. Knowing that Atiyah came from these questions may make his future work seem more motivated.

But how does one come to think of the mathematics of imaginary pants? Many ways. Atiyah’s path came from his first research specialty, which was algebraic geometry. This was his work through much of the 1950s. Algebraic geometry is about the kinds of geometric problems you get from studying algebra problems. Algebra here means the abstract stuff, although it does touch on the algebra from high school. You might, for example, do work on the roots of a polynomial, or a comfortable enough equation like x^2 + y^2 = 1 . Atiyah had started — as an undergraduate — working on projective geometries. This is what one curve looks like projected onto a different surface. This moved into elliptic curves and into particular kinds of transformations on surfaces. And algebraic geometry has proved important in number theory. You might remember that the Wiles-Taylor proof of Fermat’s Last Theorem depended on elliptic curves. Some work on the Riemann hypothesis is built on algebraic topology.

(I would like to trace things farther back. But the public record of Atiyah’s work doesn’t offer hints. I can find amusing notes like his father asserting he knew he’d be a mathematician. He was quite good at changing local currency into foreign currency, making a profit on the deal.)

It’s possible to imagine this clear line in Atiyah’s career, and why his last works might have been on the Riemann hypothesis. That’s too pat an assertion. The more interesting thing is that Atiyah had several recognizable phases and did iconic work in each of them. There is a cliche that mathematicians do their best work before they are 40 years old. And, it happens, Atiyah did earn a Fields Medal, given to mathematicians for the work done before they are 40 years old. But I believe this cliche represents a misreading of biographies. I suspect that first-rate work is done when a well-prepared mind looks fresh at a new problem. A mathematician is likely to have these traits line up early in the career. Grad school demands the deep focus on a particular problem. Getting out of grad school lets one bring this deep knowledge to fresh questions.

It is easy, in a career, to keep studying problems one has already had great success in, for good reason and with good results. It tends not to keep producing revolutionary results. Atiyah was able — by chance or design I can’t tell — to several times venture into a new field. The new field was one that his earlier work prepared him for, yes. But it posed new questions about novel topics. And this creative, well-trained mind focusing on new questions produced great work. And this is one way to be credible when one announces a proof of the Riemann hypothesis.

Here is something I could not find a clear way to fit into this essay. Atiyah recorded some comments about his life for the Web of Stories site. These are biographical and do not get into his mathematics at all. Much of it is about his life as a child of British and Lebanese parents and how that affected his schooling. One that stood out to me was about his peers at Manchester Grammar School, several of whom he rated as better students than he was. Being a good student is not tightly related to being a successful academic. Particularly as so much of a career depends on chance, on opportunities happening to be open when one is ready to take them. It would be remarkable if there were three people of greater talent than Atiyah who happened to be in the same school at the same time. It’s not unthinkable, though, and we may wonder what we can do to give people the chance to do what they are good in. (I admit this assumes that one finds doing what one is good in particularly satisfying or fulfilling.) In looking at any remarkable talent it’s fair to ask how much of their exceptional nature is that they had a chance to excel.

The Summer 2017 Mathematics A To Z: Zeta Function

Today Gaurish, of For the love of Mathematics, gives me the last subject for my Summer 2017 A To Z sequence. And also my greatest challenge: the Zeta function. The subject comes to all pop mathematics blogs. It comes to all mathematics blogs. It’s not difficult to say something about a particular zeta function. But to say something at all original? Let’s watch.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Zeta Function.

The spring semester of my sophomore year I had Intro to Complex Analysis. Monday Wednesday 7:30; a rare evening class, one of the few times I’d eat dinner and then go to a lecture hall. There I discovered something strange and wonderful. Complex Analysis is a far easier topic than Real Analysis. Both are courses about why calculus works. But why calculus for complex-valued numbers works is a much easier problem than why calculus for real-valued numbers works. It’s dazzling. Part of this is that Complex Analysis, yes, builds on Real Analysis. So Complex can take for granted some things that Real has to prove. I didn’t mind. Given the way I crashed through Intro to Real Analysis I was glad for a subject that was, relatively, a breeze.

As we worked through Complex Variables and Applications so many things, so very many things, got to be easy. The basic unit of complex analysis, at least as we young majors learned it, was in contour integrals. These are integrals whose value depends on the values of a function on a closed loop. The loop is in the complex plane. The complex plane is, well, your ordinary plane. But we say the x-coordinate and the y-coordinate are parts of the same complex-valued number. The x-coordinate is the real-valued part. The y-coordinate is the imaginary-valued part. And we call that summation ‘z’. In complex-valued functions ‘z’ serves the role that ‘x’ does in normal mathematics.

So a closed loop is exactly what you think. Take a rubber band and twist it up and drop it on the table. That’s a closed loop. Suppose you want to integrate a function, ‘f(z)’. If you can always take its derivative on this loop and on the interior of that loop, then its contour integral is … zero. No matter what the function is. As long as it’s “analytic”, as the terminology has it. Yeah, we were all stunned into silence too. (Granted, mathematics classes are usually quiet, since it’s hard to get a good discussion going. Plus many of us were in post-dinner digestive lulls.)
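A quick numerical sketch can make that stunning fact concrete. This is my illustration, not anything from the course: parametrize the unit circle, chop it into a thousand slices, and add up f(z) dz. For an everywhere-analytic function like e^z the total collapses to zero.

```python
import cmath
import math

def contour_integral(f, points=1000):
    """Numerically integrate f(z) dz around the unit circle.

    Parametrize z = e^(i*theta), so dz = i * e^(i*theta) d(theta),
    and approximate the integral with a Riemann sum over `points` slices.
    """
    total = 0
    for k in range(points):
        theta = 2 * math.pi * k / points
        z = cmath.exp(1j * theta)
        total += f(z) * 1j * z * (2 * math.pi / points)
    return total

# e^z is analytic everywhere, so its loop integral should vanish.
print(abs(contour_integral(cmath.exp)))  # effectively zero
```

Try swapping in z squared, or a sine, or any polynomial; the answer stays zero, no matter how the function wiggles on the loop.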

Integrating regular old functions of real-valued numbers is this tedious process. There’s sooooo many rules and possibilities and special cases to consider. There’s sooooo many tricks that get you the integrals of some functions. And then here, with complex-valued integrals for analytic functions, you know the answer before you even look at the function.

As you might imagine, since this is only page 113 of a 341-page book there’s more to it. Most functions that anyone cares about aren’t analytic. At least they’re not analytic everywhere inside regions that might be interesting. There’s usually some points where an interesting function ‘f(z)’ is undefined. We call these “singularities”. Yes, like starships are always running into. Only we rarely get propelled into other universes or other times or turned into ghosts or stuff like that.

So much of the rest of the course turns into ways to avoid singularities. Sometimes you can spackle them over. This is when the function happens not to be defined somewhere, but you can see what it ought to be. Sometimes you have to do something more. This turns into a search for “removable” singularities. And this does something so brilliant it looks illicit. You modify your closed loop, so that it comes up very close, as close as possible, to the singularity, but studiously avoids it. Follow this game of I’m-not-touching-you right and you can turn your integral into two parts. One is the part that’s equal to zero. The other is the part that’s a constant times whatever the function is at the singularity you’re removing. And that ought to be easy to find the value for. (Being able to find a function’s value doesn’t mean you can find its derivative.)
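That constant, by the way, is 2πi, at least for the simplest kind of singularity, one that looks like a function divided by z. A numerical sketch of mine, not the textbook’s, shows it: integrate e^z / z around the unit circle, and out pops 2πi times the function’s value at the singularity, e^0 = 1.

```python
import cmath
import math

# Riemann-sum the loop integral of e^z / z around the unit circle,
# z = e^(i*theta).  The singularity at z = 0 sits inside the loop.
N = 1000
total = 0
for k in range(N):
    z = cmath.exp(2j * math.pi * k / N)
    total += (cmath.exp(z) / z) * 1j * z * (2 * math.pi / N)

# The answer is (2*pi*i) times the function's value at the singularity,
# here e^0 = 1.
print(total)            # close to 2*pi*i
print(2j * math.pi)
```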

Those tricks were hard to master. Not because they were hard. Because they were easy, in a context where we expected hard. But after that we got into how to move singularities. That is, how to do a change of variables that moved the singularities to where they’re more convenient for some reason. How could this be more convenient? Because of chapter five, series. In regular old calculus we learn how to approximate well-behaved functions with polynomials. In complex-variable calculus, we learn the same thing all over again. They’re polynomials of complex-valued variables, but it’s the same sort of thing. And not just polynomials, but things that look like polynomials except they’re powers of \frac{1}{z} instead. These open up new ways to approximate functions, and to remove singularities from functions.

And then we get into transformations. These are about turning a problem that’s hard into one that’s easy. Or at least different. They’re a change of variable, yes. But they also change what exactly the function is. This reshuffles the problem. Makes for a change in singularities. Could make ones that are easier to work with.

One of the useful, and so common, transforms is called the Laplace-Stieltjes Transform. (“Laplace” is said like you might guess. “Stieltjes” is said, or at least we were taught to say it, like “Stilton cheese” without the “ton”.) And it tends to create functions that look like a series, the sum of a bunch of terms. Infinitely many terms. Each of those terms looks like a number times another number raised to some constant times ‘z’. As the course came to its conclusion, we were all prepared to think about these infinite series. Where singularities might be. Which of them might be removable.

These functions, these results of the Laplace-Stieltjes Transform, we collectively call ‘zeta functions’. There are infinitely many of them. Some of them are relatively tame. Some of them are exotic. One of them is world-famous. Professor Walsh — I don’t mean to name-drop, but I discovered the syllabus for the course tucked in the back of my textbook and I’m delighted to rediscover it — talked about it.

That world-famous one is, of course, the Riemann Zeta function. Yes, that same Riemann who keeps turning up, over and over again. It looks simple enough. Almost tame. Take the counting numbers, 1, 2, 3, and so on. Take your ‘z’. Raise each of the counting numbers to that ‘z’. Take the reciprocals of all those numbers. Add them up. What do you get?

A mass of fascinating results, for one. Functions you wouldn’t expect are concealed in there. There’s strips where the real part is zero. There’s strips where the imaginary part is zero. There’s points where both the real and imaginary parts are zero. We know infinitely many of them. If ‘z’ is -2, for example, the sum is zero. Also if ‘z’ is -4. -6. -8. And so on. These are easy to show, and so are dubbed ‘trivial’ zeroes. To say some are ‘trivial’ is to say that there are others that are not trivial. Where are they?

Professor Walsh explained. We know of many of them. The nontrivial zeroes we know of all share something in common. They have a real part that’s equal to 1/2. There’s a zero that’s at about the number \frac{1}{2} - \imath 14.13 . Also at \frac{1}{2} + \imath 14.13 . There’s one at about \frac{1}{2} - \imath 21.02 . Also about \frac{1}{2} + \imath 21.02 . (There’s a symmetry, you maybe guessed.) Every nontrivial zero we’ve found has the same real part, 1/2. But we don’t know that they all do. Nobody does. It is the Riemann Hypothesis, the great unsolved problem of mathematics. Much more important than that Fermat’s Last Theorem, which back then was still merely a conjecture.

What a prospect! What a promise! What a way to set us up for the final exam in a couple of weeks.

I had an inspiration, a kind of scheme of showing that a nontrivial zero couldn’t be within a given circular contour. Make the size of this circle grow. Move its center farther away from the z-coordinate \frac{1}{2} + \imath 0 to match. Show there’s still no nontrivial zeroes inside. And therefore, logically, since I would have shown nontrivial zeroes couldn’t be anywhere but on this special line, and we know nontrivial zeroes exist … I leapt enthusiastically into this project. A little less enthusiastically the next day. Less so the day after. And on. After maybe a week I went a day without working on it. But came back, now and then, prodding at my brilliant would-be proof.

The Riemann Zeta function was not on the final exam, which I’ve discovered was also tucked into the back of my textbook. It asked more things like finding all the singular points and classifying what kinds of singularities they were for functions like e^{-\frac{1}{z}} instead. If the syllabus is accurate, we got as far as page 218. And I’m surprised to see the professor put his e-mail address on the syllabus. It was merely “bwalsh@math”, but understand, the Internet was a smaller place back then.

I finished the course with an A-, but without answering any of the great unsolved problems of mathematics.

The Summer 2017 Mathematics A To Z: L-function

I’m brought back to elliptic curves today thanks to another request from Gaurish, of the For The Love Of Mathematics blog. Interested in how that’s going to work out? Me too.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

So stop me if you’ve heard this one before. We’re going to make something interesting. You bring to it a complex-valued number. Anything you like. Let me call it ‘s’ for the sake of convenience. I know, it’s weird not to call it ‘z’, but that’s how this field of mathematics developed. I’m going to make a series built on this. A series is the sum of all the terms in a sequence. I know, it seems weird for a ‘series’ to be a single number, but that’s how that field of mathematics developed. The underlying sequence? I’ll make it in three steps. First, I start with all the counting numbers: 1, 2, 3, 4, 5, and so on. Second, I take each one of those terms and raise them to the power of your ‘s’. Third, I take the reciprocal of each of them. That’s the sequence. And when we add —

Yes, that’s right, it’s the Riemann-Zeta Function. The one behind the Riemann Hypothesis. That’s the mathematical conjecture that everybody loves to cite as the biggest unsolved problem in mathematics now that we know someone did something about Fermat’s Last Theorem. The conjecture is about what the zeroes of this function are. What values of ‘s’ make this sum equal to zero? Some boring ones. Zero, negative two, negative four, negative six, and so on. It has a lot of non-boring zeroes. All the ones we know of have an ‘s’ with a real part of ½. So far we know of at least 36 billion values of ‘s’ that make this add up to zero. They’re all ½ plus some imaginary number. We conjecture that this isn’t coincidence and all the non-boring zeroes are like that. We might be wrong. But it’s the way I would bet.

Anyone who’d be reading this far into a pop mathematics blog knows something of why the Riemann Hypothesis is interesting. It carries implications about prime numbers. It tells us things about a host of other theorems that are nice to have. Also they know it’s hard to prove. Really, really hard.

Ancient mathematical lore tells us there are a couple ways to solve a really, really hard problem. One is to narrow its focus. Try to find as simple a case of it as you can solve. Maybe a second simple case you can solve. Maybe a third. This could show you how, roughly, to solve the general problem. Not always. Individual cases of Fermat’s Last Theorem are easy enough to solve. You can show that a^3 + b^3 = c^3 doesn’t have any non-boring answers where a, b, and c are all positive whole numbers. Same with a^5 + b^5 = c^5 , though it takes longer. That doesn’t help you with the general a^n + b^n = c^n .
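A computer search can’t prove even one case, since a proof has to cover all the whole numbers. But it can fail to find counterexamples, which is at least reassuring. A toy sketch of mine; the search bounds are arbitrary.

```python
def fermat_counterexamples(exponent, limit):
    """Brute-force search for a^n + b^n = c^n with 1 <= a <= b and c <= limit."""
    hits = []
    # Precompute the perfect n-th powers up to limit^n for fast lookup.
    powers = {c ** exponent: c for c in range(1, limit + 1)}
    for a in range(1, limit + 1):
        for b in range(a, limit + 1):
            total = a ** exponent + b ** exponent
            if total in powers:
                hits.append((a, b, powers[total]))
    return hits

print(fermat_counterexamples(3, 100))  # [] -- no solutions, as promised
print(fermat_counterexamples(2, 20))   # plenty for squares: (3, 4, 5), (5, 12, 13), ...
```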

There’s another approach. It sounds like the sort of crazy thing Captain Kirk would get away with. It’s to generalize, to make a bigger, even more abstract problem. Sometimes that makes it easier.

For the Riemann-Zeta Function there’s one compelling generalization. It fits into that sequence I described making. After taking the reciprocals of integers-raised-to-the-s-power, multiply each by some number. Which number? Well, that depends on what you like. It could be the same number every time, if you like. That’s boring, though. That’s just the Riemann-Zeta Function times your number. It’s more interesting if what number you multiply by depends on which integer you started with. (Do not let it depend on ‘s’; that’s more complicated than you want.) When you do that? Then you’ve created an L-Function.

Specifically, you’ve created a Dirichlet L-Function. Dirichlet here is Peter Gustav Lejeune Dirichlet, a 19th century German mathematician who got his name on like everything. He did major work on partial differential equations, on Fourier series, on topology, in algebra, and on number theory, which is what we’d call these L-functions. There are other L-Functions, with identifying names such as Artin and Hecke and Euler, which get more directly into group theory. They look much like the Dirichlet L-Function. In building the sequence I described in the top paragraph, they do something else for the second step.

The L-Function is going to look like this:

L(s) = \sum_{n = 1}^{\infty} a_n \cdot \frac{1}{n^s}

The sigma there means to evaluate the thing that comes after it for each value of ‘n’ starting at 1 and increasing, by 1, up to … well, something infinitely large. The a_n are the numbers you’ve picked. They’re values that depend on the index ‘n’, but don’t depend on the power ‘s’. This may look funny but it’s a standard way of writing the terms in a sequence.

An L-Function has to meet some particular criteria that I’m not going to worry about here. Look them up before you get too far into your research. These criteria give us ways to classify different L-Functions, though. We can describe them by degree, much as we describe polynomials. We can describe them by signature, part of those criteria I’m not getting into. We can describe them by properties of the extra numbers, the ones in that fourth step that you multiply the reciprocals by. And so on. LMFDB, an encyclopedia of L-Functions, lists eight or nine properties usable for a taxonomy of these things. (The ambiguity is in what things you consider to depend on what other things.)

What makes this interesting? For one, everything that makes the Riemann Hypothesis interesting. The Riemann-Zeta Function is a slice of the L-Functions. But there’s more. They tie into elliptic curves. Every elliptic curve corresponds to some L-Function. We can use the elliptic curve or the L-Function to prove what we wish to show. Elliptic curves are subject to group theory; so, we can bring group theory into these series.

And then it gets deeper. It always does. Go back to that formula for the L-Function like I put in mathematical symbols. I’m going to define a new function. It’s going to look a lot like a polynomial. Well, that L(s) already looked a lot like a polynomial, but this is going to look even more like one.

Pick a number τ. It’s complex-valued. Any number. All that I care is that its imaginary part be positive. In the trade we say that’s “in the upper half-plane”, because we often draw complex-valued numbers as points on a plane. The real part serves as the horizontal axis and the imaginary part as the vertical.

Now go back to your L-Function. Remember those a_n numbers you picked? Good. I’m going to define a new function based on them. It looks like this:

f(\tau) = \sum_{n = 1}^{\infty} a_n \left( e^{2 \pi \imath \tau} \right)^n

You see what I mean about looking like a polynomial? If τ is a complex-valued number, then e^{2 \pi \imath \tau} is just another complex-valued number. If we gave that a new name like ‘z’, this function would look like the sum of constants times z raised to positive powers. We’d never know it was any kind of weird polynomial.
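Part of why this substitution behaves so politely: when τ is in the upper half-plane, the number e^{2πiτ} has absolute value less than 1, so its positive powers shrink as ‘n’ grows. A quick check of mine:

```python
import cmath
import math

# q = e^(2*pi*i*tau) for a few choices of tau in the upper half-plane.
# The positive imaginary part of tau forces |q| < 1, so the powers q^n
# shrink as n grows -- which is what tames the "weird polynomial".
for tau in [1j, 0.25 + 0.5j, -3 + 0.1j]:
    q = cmath.exp(2j * math.pi * tau)
    print(tau, abs(q))  # |q| = e^(-2*pi*Im(tau)), always below 1
```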

Anyway. This new function ‘f(τ)’ has some properties. It might be something called a weight-2 Hecke eigenform, a thing I am not going to explain without charging someone by the hour. But see the logic here: every elliptic curve matches with some kind of L-Function. Each L-Function matches with some ‘f(τ)’ kind of function. Those functions might or might not be these weight-2 Hecke eigenforms.

So here’s the thing. There was a big hypothesis formed in the 1950s that every rational elliptic curve matches to one of these ‘f(τ)’ functions that’s one of these eigenforms. It’s true. It took decades to prove. You may have heard of it, as the Taniyama-Shimura Conjecture. In the 1990s Wiles and Taylor proved this was true for a lot of elliptic curves, which is what proved Fermat’s Last Theorem after all that time. The rest of it was proved around 2000.

As I said, sometimes you have to make your problem bigger and harder to get something interesting out of it.

I mentioned this above. LMFDB is a fascinating site worth looking at. It’s got a lot of L-Function and Riemann-Zeta function-related materials.

The End 2016 Mathematics A To Z: Xi Function

I have today another request from gaurish, who’s also been good enough to give me requests for ‘Y’ and ‘Z’. I apologize for coming to this a day late. But it was Christmas and many things demanded my attention.

Xi Function.

We start with complex-valued numbers. People discovered them because they were useful tools to solve polynomials. They turned out to be more than useful fictions, if numbers are anything more than useful fictions. We can add and subtract them easily. Multiply and divide them less easily. We can even raise them to powers, or raise numbers to them.

If you become a mathematics major then somewhere in Intro to Complex Analysis you’re introduced to an exotic, infinitely large sum. It’s spoken of reverently as the Riemann Zeta Function, and it connects to something named the Riemann Hypothesis. Then you remember that you’ve heard of this, because if you’re willing to become a mathematics major you’ve read mathematics popularizations. And you know the Riemann Hypothesis is an unsolved problem. It proposes something that might be true or might be false. Either way has astounding implications for the way numbers fit together.

Riemann here is Bernhard Riemann, who’s turned up often in these A To Z sequences. We saw him in spheres and in sums, leading to integrals. We’ll see him again. Riemann just covered so much of 19th century mathematics; we can’t talk about calculus without him. Zeta, Xi, and later on, Gamma are the famous Greek letters. Mathematicians fall back on them because the Roman alphabet just hasn’t got enough letters for our needs. I’m writing them out as English words instead because if you aren’t familiar with them they look like an indistinct set of squiggles. Even if you are familiar, sometimes. I got confused some in researching this because I slipped between a lowercase-xi and a lowercase-zeta in my mind. All I can plead is it’s been a hard week.

Riemann’s Zeta function is famous. It’s easy to approach. You can write it as a sum. An infinite sum, but still, those are easy to understand. Pick a complex-valued number. I’ll call it ‘s’ because that’s the standard. Next take each of the counting numbers: 1, 2, 3, and so on. Raise each of them to the power ‘s’. And take the reciprocal, one divided by those numbers. Add all that together. You’ll get something. Might be real. Might be complex-valued. Might be zero. We know many values of ‘s’ that would give us a zero. The Riemann Hypothesis is about characterizing all the possible values of ‘s’ that give us a zero. We know some of them, so boring we call them trivial: -2, -4, -6, -8, and so on. (This looks crazy. There’s another way of writing the Riemann Zeta function which makes it obvious instead.) The Riemann Hypothesis is about whether all the proper, that is, non-boring values of ‘s’ that give us a zero are 1/2 plus some imaginary number.

It’s a rare thing mathematicians have only one way of writing. If something’s been known and studied for a long time there are usually variations. We find different ways to write the problem. Or we find different problems which, if solved, would solve the original problem. The Riemann Xi function is an example of this.

I’m going to spare you the formula for it. That’s in self-defense. I haven’t found an expression of the Xi function that isn’t a mess. The normal ways to write it themselves call on the Zeta function, as well as the Gamma function. The Gamma function looks like factorials, for the counting numbers. It does its own thing for other complex-valued numbers.

That said, I’m not sure what the advantages are in looking at the Xi function. The one that people talk about is its symmetry. Its value at a particular complex-valued number ‘s’ is the same as its value at the number ‘1 – s’. This may not seem like much. But it gives us this way of rewriting the Riemann Hypothesis. Imagine all the complex-valued numbers with the same imaginary part. That is, all the numbers that we could write as, say, ‘x + 4i’, where ‘x’ is some real number. If the size of the value of Xi, evaluated at ‘x + 4i’, always increases as ‘x’ starts out equal to 1/2 and increases, then the Riemann hypothesis is true. (This has to be true not just for ‘x + 4i’, but for all possible imaginary numbers. So, ‘x + 5i’, and ‘x + 6i’, and even ‘x + 4.1 i’ and so on. But it’s easier to start with a single example.)

Or another way to write it. Suppose the size of the value of Xi, evaluated at ‘x + 4i’ (or whatever), always gets smaller as ‘x’ starts out at a negative infinitely large number and keeps increasing all the way to 1/2. If that’s true, and true for every imaginary number, including ‘x – i’, then the Riemann hypothesis is true.
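That symmetry is concrete enough to check numerically. I said I’d spare you the formula, but for this sketch we need it: the standard completed form is ξ(s) = ½ s(s-1) π^{-s/2} Γ(s/2) ζ(s). The code is mine, not from any of the papers: it uses the well-known Lanczos approximation for the Gamma function of a complex-valued number (the coefficients are the standard published ones) and the alternating-series form of Zeta, valid for arguments with positive real part.

```python
import cmath
import math

# Standard Lanczos approximation coefficients (g = 7), needed because
# math.gamma cannot handle complex arguments.
_LANCZOS = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
            771.32342877765313, -176.61502916214059, 12.507343278686905,
            -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def cgamma(z):
    """Gamma function for complex z, via the Lanczos approximation."""
    if z.real < 0.5:  # reflection formula extends it to the left half-plane
        return math.pi / (cmath.sin(math.pi * z) * cgamma(1 - z))
    z -= 1
    x = _LANCZOS[0]
    for i in range(1, len(_LANCZOS)):
        x += _LANCZOS[i] / (z + i)
    t = z + 7.5
    return math.sqrt(2 * math.pi) * t ** (z + 0.5) * cmath.exp(-t) * x

def czeta(s, terms=100000):
    """Riemann zeta for Re(s) > 0, s != 1, via the alternating eta series."""
    eta, sign = 0.0, 1
    for n in range(1, terms + 1):
        eta += sign / n ** s
        sign = -sign
    eta_next = eta + sign / (terms + 1) ** s
    return (eta + eta_next) / 2 / (1 - 2 ** (1 - s))

def xi(s):
    """Riemann's completed xi function, satisfying xi(s) == xi(1 - s)."""
    return 0.5 * s * (s - 1) * math.pi ** (-s / 2) * cgamma(s / 2) * czeta(s)

s = 0.3 + 2j
print(abs(xi(s) - xi(1 - s)))  # tiny: the symmetry in action
```

Evaluate it at any ‘s’ in the strip and at ‘1 - s’ and the two values agree, up to the approximation error.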

And it turns out if the Riemann hypothesis is true we can prove the two cases above. We’d write the theorem about this in our papers with the start ‘The Following Are Equivalent’. In our notes we’d write ‘TFAE’, which is just as good. Then we’d take whichever of them seemed easiest to prove and find out it isn’t that easy after all. But if we do get through we declare ourselves fortunate, sit back feeling triumphant, and consider going out somewhere to celebrate. But we haven’t got any of these alternatives solved yet. None of the equivalent ways to write it has helped so far.

We know some things. For example, we know there are infinitely many roots for the Xi function with a real part that’s 1/2. This is what we’d need for the Riemann hypothesis to be true. But we don’t know that all of the nontrivial roots have that real part.

The Xi function isn’t entirely about what it can tell us for the Zeta function. The Xi function has its own exotic and wonderful properties. In a 2009 paper, for example, Drs Yang-Hui He, Vishnu Jejjala, and Djordje Minic describe how if the zeroes of the Xi function are all exactly where we expect them to be then we learn something about a particular kind of string theory. I admit not knowing just what to say about a genus-one free energy of the topological string past what I have read in this paper. In another paper they write of how the zeroes of the Xi function correspond to the description of the behavior for a quantum-mechanical operator that I just can’t find a way to describe clearly in under three thousand words.

But mathematicians often speak of the strangeness that mathematical constructs can match reality so well. And here is surely a powerful one. We learned of the Riemann Hypothesis originally by studying how many prime numbers there are compared to the counting numbers. If it’s true, then the physics of the universe may be set up one particular way. Is that not astounding?