## How to Add Up Powers of Numbers

Do you need a formula to tell you what the sum of the first N counting numbers, raised to a power, is? No, you do not. Not really. It can save a bit of time to know the sum of the numbers raised to the first power. Most mathematicians would know it, or be able to recreate it fast enough: $\sum_{n = 1}^{N} n = 1 + 2 + 3 + \cdots + N = \frac{1}{2}N\left(N + 1\right)$
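If you want to reassure yourself the closed form is right, a few lines of Python check it against direct addition:

```python
# Compare the closed form (1/2) * N * (N + 1) against adding the numbers up.
def triangular(N):
    """Sum of the first N counting numbers, by the closed-form formula."""
    return N * (N + 1) // 2

for N in (1, 10, 100, 1000):
    assert triangular(N) == sum(range(1, N + 1))

print(triangular(100))  # 5050
```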

But there are similar formulas to add up, say, the counting numbers squared, or cubed, and so on. And a toot on Mathstodon, the mathematics-themed instance of the social network Mastodon, made me aware of a cute paper about this. In it Dr Alessandro Mariani describes A simple mnemonic to compute sums of powers.

It’s a neat one. Mariani describes a way to use knowledge of the sum of numbers to the first power to generate a formula for the sum of squares. And then to use the sum of squares formula to generate the sum of cubes. The sum of cubes then lets you get the sum of fourth powers. And so on. This takes a while to do if you’re interested in the sum of twentieth powers. But do you know how many times you’ll ever need to generate that formula? Anyway, as Mariani notes, this sort of thing is useful if you find yourself at a mathematics competition. Or some other event where you can’t just have the computer calculate this stuff.

Mariani’s process is a great one. Like many mnemonics it doesn’t make literal sense. It expects one to integrate and differentiate polynomials. Anyone likely to be interested in a formula for the sums of twelfth powers knows how to do those in their sleep. But they’re integrating and differentiating polynomials for which, in context, the integrals and derivatives don’t exist. Or at least don’t mean anything. That’s all right. If all you want is the right answer, it’s okay to get there by a wrong method. At least if you verify the answer is right, which the last section of Mariani’s paper does. So, give it a read if you’d like to see a neat mathematical trick to a maybe useful result.

## My All 2020 Mathematics A to Z: Yang Hui

Nobody had particular suggestions for the letter ‘Y’ this time around. It’s a tough letter to find mathematical terms for. It doesn’t even lend itself to typography or wordplay the way ‘X’ does. So I chose to do one more biographical piece before the series concludes. There were some twists along the way in writing it.

Before I get there, I have a word for a longtime friend, Porsupah Ree. Among her hobbies is watching, and photographing, the wild rabbits. A couple years back she got a great photograph. It’s one that you may have seen going around social media with a caption about how “everybody was bun fu fighting”. She’s put it up on Redbubble, so you can get the photograph as a print or a coffee mug or a pillow, or many other things. And you can support her hobbies of rabbit photography and eating.

Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

# Yang Hui.

Several problems beset me in writing about this significant 13th-century Chinese mathematician. One is my ignorance of the Chinese mathematical tradition. I have little to guide me in choosing what tertiary sources to trust. Another is that the tertiary sources know little about him. The Complete Dictionary of Scientific Biography gives a dire verdict. “Nothing is known about the life of Yang Hui, except that he produced mathematical writings”. MacTutor’s biography gives his lifespan as from circa 1238 to circa 1298, on what basis I do not know. He seems to have been born in what’s now Hangzhou, near Shanghai. He seems to have worked as a civil servant. This is what I would have imagined; most scholars then were. It’s the sort of job that gives one time to write mathematics. Also he seems not to have been a prominent civil servant; he’s apparently not listed in any dynastic records. After that, we need to speculate.

E F Robertson, writing the MacTutor biography, speculates that Yang Hui was a teacher. That he was writing to explain mathematics in interesting and helpful ways. I’m not qualified to judge Robertson’s conclusions. And Robertson notes that’s not inconsistent with Yang being a civil servant. Robertson’s argument is based on Yang’s surviving writings, and what they say about the problems they demonstrate. There is, for example, 1274’s Cheng Chu Tong Bian Ben Mo. Robertson translates that title as Alpha and omega of variations on multiplication and division. I try to work out my unease at having something translated from Chinese as “Alpha and Omega”. That is my issue. Relevant here is that a syllabus prefaces the first chapter. It provides a schedule and series of topics, as well as a rationale for the plan.

Was Yang Hui a discoverer of significant new mathematics? Or did he “merely” present what was already known in a useful way? This is not to dismiss him; we have the same questions about Euclid. He is held up as among the great Chinese mathematicians of the 13th century, a particularly fruitful time and place for mathematics. How much greatness to assign to original work and how much to good exposition is unanswerable with what we know now.

Consider for example the thing I’ve featured before, Yang Hui’s Triangle. It’s the arrangement of numbers known in the west as Pascal’s Triangle. Yang provides the earliest extant description of the triangle and how to form it and use it. This in the 1261 Xiangjie jiuzhang suanfa (Detailed analysis of the mathematical rules in the Nine Chapters and their reclassifications). But in it, Yang Hui says he learned the triangle from a treatise by Jia Xian, Huangdi Jiuzhang Suanjing Xicao (The Yellow Emperor’s detailed solutions to the Nine Chapters on the Mathematical Art). Jia Xian lived in the 11th century; he’s known to have written two books, both lost. Yang Hui’s commentary gives us a fair idea what Jia Xian wrote about. But we’re limited in judging what was Jia Xian’s idea and what was Yang Hui’s inference.

The Nine Chapters referred to is Jiuzhang suanshu. An English title is Nine Chapters on the Mathematical Art. The book is a 246-problem handbook of mathematics that dates back to antiquity. It’s impossible to say when the Nine Chapters was first written. Liu Hui, who wrote a commentary on the Nine Chapters in 263 CE, thought it predated the Qin ruler Shih Huang Ti’s 213 BCE destruction of all books. But the book — and the many commentaries on the book — served as a centerpiece for Chinese mathematics for a long while. Jia Xian’s and Yang Hui’s work was part of this tradition.

Yang Hui’s Detailed Analysis covers the Nine Chapters. It goes on for three more chapters, about geometry, the fundamentals of mathematics, and even how to classify the problems. He had further works. In 1275 Yang published Practical mathematical rules for surveying and Continuation of ancient mathematical methods for elucidating strange properties of numbers. (I’m not confident in my ability to give the Chinese titles for these.) The first title particularly echoes how in the Western tradition geometry was born of practical concerns.

The breadth of topics covered matches, it seems to me, a decent modern (American) high school mathematics education. The triangle, and the binomial expansions it gives us, fit that. Yang writes about more efficient ways to multiply on the abacus. He writes about finding simultaneous solutions to sets of equations, through a technique that amounts to finding the matrix of coefficients for the equations, and its determinant. He writes about finding the roots for cubic and quartic equations. The technique is commonly known in the west as Horner’s Method, a technique of calculating divided differences. We see the calculating of areas and volumes for regular shapes.
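We can’t reconstruct Yang Hui’s own procedure from here. But the nested evaluation at the heart of what the west calls Horner’s Method is easy to sketch; take this as an illustration of the idea, not as his method:

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x by nested multiplication.
    Coefficients are listed from the highest power down."""
    result = 0
    for c in coeffs:
        result = result * x + c
    return result

# x^3 - 6x^2 + 11x - 6 factors as (x - 1)(x - 2)(x - 3), so 2 is a root.
assert horner([1, -6, 11, -6], 2) == 0
print(horner([1, -6, 11, -6], 4))  # 6
```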

And sequences. He found the sum of the squares of natural numbers followed a rule: $1^2 + 2^2 + 3^2 + \cdots + n^2 = \frac{1}{3}\cdot n\cdot (n + 1)\cdot (n + \frac{1}{2})$

This by a method of “piling up squares”, described some here by the Mathematical Association of America. (Me, I spent 40 minutes that could have gone into this essay convincing myself the formula was right. I couldn’t make myself believe the $(n + \frac{1}{2})$ part and had to work it out a couple different ways.)
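If, like me, you don’t quite believe the $(n + \frac{1}{2})$ part, exact rational arithmetic settles it faster than 40 minutes:

```python
from fractions import Fraction

def sum_of_squares(n):
    """Yang Hui's closed form: (1/3) * n * (n + 1) * (n + 1/2)."""
    return Fraction(1, 3) * n * (n + 1) * (n + Fraction(1, 2))

# Check against adding the squares directly, with no rounding anywhere.
for n in range(1, 101):
    assert sum_of_squares(n) == sum(k * k for k in range(1, n + 1))

print(sum_of_squares(10))  # 385
```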

And then there’s magic squares, and magic circles. He seems to have found them, as professional mathematicians today would, good ways to interest people in calculation. Not magic; he called them something like number diagrams. But he gives magic squares from three-by-three all the way to ten-by-ten. We don’t know of earlier examples of Chinese mathematicians writing about the larger magic squares. But Yang Hui doesn’t claim to be presenting new work. He also gives magic circles. The simplest is a web of seven intersecting circles, each with four numbers along the circle and one at its center. The sum of the center and the circumference numbers is 65 for all seven circles. Is this significant? No; merely fun.
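We don’t have Yang Hui’s own squares in front of us here, but the classic Chinese three-by-three arrangement, the Lo Shu square, shows what’s being checked: every row, column, and diagonal adds to the same number.

```python
# The Lo Shu square: every row, column, and diagonal sums to 15.
lo_shu = [[4, 9, 2],
          [3, 5, 7],
          [8, 1, 6]]

lines = ([sum(row) for row in lo_shu]
         + [sum(col) for col in zip(*lo_shu)]
         + [sum(lo_shu[i][i] for i in range(3)),
            sum(lo_shu[i][2 - i] for i in range(3))])

assert all(total == 15 for total in lines)
print(lines)  # eight 15s
```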

Grant this breadth of work. Is he significant? I learned this year that familiar names might have been obscure until quite recently. The record is once again ambiguous. Other mathematicians wrote about Yang Hui’s work in the early 1300s. Yang Hui’s works were printed in China in 1378, says the Complete Dictionary of Scientific Biography, and reprinted in Korea in 1433. They’re listed in a 1441 catalogue of the Ming Imperial Library. Seki Takakazu, a towering figure in 17th century Japanese mathematics, copied the Korean text by hand. Yet Yang Hui’s work seems to have been lost by the 18th century. Reconstructions, from commentaries and encyclopedias, started in the 19th century. But we don’t have everything we know he wrote. We don’t even have a complete text of Detailed Analysis. This is not to say he wasn’t influential. All I can say is there seems to have been a time when his influence was indirect.

I am sorry to offer so much uncertainty about Yang Hui. I had hoped to provide a fuller account. But we always only know thin slivers of life, and try to use those to know anything.

Next week I hope to finish this year’s A-to-Z project. The whole All 2020 A-to-Z should be gathered at this link. And all the essays from every A-to-Z series should be at this link. I haven’t decided whether I’ll publish on Wednesday or Friday. It’ll depend what I can get done over the weekend; we’ll see. Thank you for reading.

## The Summer 2017 Mathematics A To Z: L-function

I’m brought back to elliptic curves today thanks to another request from Gaurish, of the For The Love Of Mathematics blog. Interested in how that’s going to work out? Me too. Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US\$10.

So stop me if you’ve heard this one before. We’re going to make something interesting. You bring to it a complex-valued number. Anything you like. Let me call it ‘s’ for the sake of convenience. I know, it’s weird not to call it ‘z’, but that’s how this field of mathematics developed. I’m going to make a series built on this. A series is the sum of all the terms in a sequence. I know, it seems weird for a ‘series’ to be a single number, but that’s how that field of mathematics developed. The underlying sequence? I’ll make it in three steps. First, I start with all the counting numbers: 1, 2, 3, 4, 5, and so on. Second, I take each of those terms and raise it to the power of your ‘s’. Third, I take the reciprocal of each of them. That’s the sequence. And when we add —
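The three steps, in code — with the caveat that a partial sum only approximates the real thing, and the series only converges when the real part of ‘s’ is greater than 1:

```python
import math

def zeta_partial(s, terms=100_000):
    """Partial sum of 1/1^s + 1/2^s + 1/3^s + ...; a good approximation
    only when the real part of s is greater than 1."""
    return sum(1 / n ** s for n in range(1, terms + 1))

# A classical check: zeta(2) = pi^2 / 6.
print(abs(zeta_partial(2) - math.pi ** 2 / 6) < 1e-4)  # True
```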

Yes, that’s right, it’s the Riemann-Zeta Function. The one behind the Riemann Hypothesis. That’s the mathematical conjecture that everybody loves to cite as the biggest unsolved problem in mathematics now that we know someone did something about Fermat’s Last Theorem. The conjecture is about what the zeroes of this function are. What values of ‘s’ make this sum equal to zero? Some boring ones. Zero, negative two, negative four, negative six, and so on. It has a lot of non-boring zeroes. All the ones we know of have an ‘s’ with a real part of ½. So far we know of at least 36 billion values of ‘s’ that make this add up to zero. They’re all ½ plus some imaginary number. We conjecture that this isn’t coincidence and all the non-boring zeroes are like that. We might be wrong. But it’s the way I would bet.

Anyone who’d be reading this far into a pop mathematics blog knows something of why the Riemann Hypothesis is interesting. It carries implications about prime numbers. It tells us things about a host of other theorems that are nice to have. Also they know it’s hard to prove. Really, really hard.

Ancient mathematical lore tells us there are a couple ways to solve a really, really hard problem. One is to narrow its focus. Try to find as simple a case of it as you can solve. Maybe a second simple case you can solve. Maybe a third. This could show you how, roughly, to solve the general problem. Not always. Individual cases of Fermat’s Last Theorem are easy enough to solve. You can show that $a^3 + b^3 = c^3$ doesn’t have any non-boring answers where a, b, and c are all positive whole numbers. Same with $a^5 + b^5 = c^5$, though it takes longer. That doesn’t help you with the general $a^n + b^n = c^n$.

There’s another approach. It sounds like the sort of crazy thing Captain Kirk would get away with. It’s to generalize, to make a bigger, even more abstract problem. Sometimes that makes it easier.

For the Riemann-Zeta Function there’s one compelling generalization. It fits into that sequence I described making. After taking the reciprocals of integers-raised-to-the-s-power, multiply each by some number. Which number? Well, that depends on what you like. It could be the same number every time, if you like. That’s boring, though. That’s just the Riemann-Zeta Function times your number. It’s more interesting if what number you multiply by depends on which integer you started with. (Do not let it depend on ‘s’; that’s more complicated than you want.) When you do that? Then you’ve created an L-Function.

Specifically, you’ve created a Dirichlet L-Function. Dirichlet here is Peter Gustav Lejeune Dirichlet, a 19th century German mathematician who got his name on like everything. He did major work on partial differential equations, on Fourier series, on topology, in algebra, and on number theory, which is what we’d call these L-functions. There are other L-Functions, with identifying names such as Artin and Hecke and Euler, which get more directly into group theory. They look much like the Dirichlet L-Function. In building the sequence I described in the top paragraph, they do something else for the second step.

The L-Function is going to look like this: $L(s) = \sum_{n = 1}^{\infty} a_n \cdot \frac{1}{n^s}$

The sigma there means to evaluate the thing that comes after it for each value of ‘n’ starting at 1 and increasing, by 1, up to … well, something infinitely large. The $a_n$ are the numbers you’ve picked. They’re values that depend on the index ‘n’, but don’t depend on the power ‘s’. This may look funny but it’s a standard way of writing the terms in a sequence.
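For a concrete example — my choice, nothing forced by the definition — take the multipliers $a_n$ cycling +1, 0, -1, 0 with period four. That’s the non-principal Dirichlet character mod 4, and at s = 1 its L-function adds up to π/4:

```python
import math

def L_partial(a, s, terms=200_000):
    """Partial sum of sum_{n >= 1} a(n) / n^s for a choice of multipliers."""
    return sum(a(n) / n ** s for n in range(1, terms + 1))

# a_n cycles +1, 0, -1, 0: the non-principal character mod 4.
# At s = 1 this gives 1 - 1/3 + 1/5 - 1/7 + ... = pi/4.
chi = lambda n: {1: 1, 3: -1}.get(n % 4, 0)
print(abs(L_partial(chi, 1) - math.pi / 4) < 1e-5)  # True
```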

An L-Function has to meet some particular criteria that I’m not going to worry about here. Look them up before you get too far into your research. These criteria give us ways to classify different L-Functions, though. We can describe them by degree, much as we describe polynomials. We can describe them by signature, part of those criteria I’m not getting into. We can describe them by properties of the extra numbers, the ones in that fourth step that you multiply the reciprocals by. And so on. LMFDB, an encyclopedia of L-Functions, lists eight or nine properties usable for a taxonomy of these things. (The ambiguity is in what things you consider to depend on what other things.)

What makes this interesting? For one, everything that makes the Riemann Hypothesis interesting. The Riemann-Zeta Function is a slice of the L-Functions. But there’s more. They tie into elliptic curves. Every elliptic curve corresponds to some L-Function. We can use the elliptic curve or the L-Function to prove what we wish to show. Elliptic curves are subject to group theory; so, we can bring group theory into these series.

And then it gets deeper. It always does. Go back to that formula for the L-Function like I put in mathematical symbols. I’m going to define a new function. It’s going to look a lot like a polynomial. Well, that L(s) already looked a lot like a polynomial, but this is going to look even more like one.

Pick a number τ. It’s complex-valued. Any number. All that I care is that its imaginary part be positive. In the trade we say that’s “in the upper half-plane”, because we often draw complex-valued numbers as points on a plane. The real part serves as the horizontal and the imaginary part serves as the vertical axis.

Now go back to your L-Function. Remember those $a_n$ numbers you picked? Good. I’m going to define a new function based on them. It looks like this: $f(\tau) = \sum_{n = 1}^{\infty} a_n \left( e^{2 \pi \imath \tau}\right)^n$

You see what I mean about looking like a polynomial? If τ is a complex-valued number, then $e^{2 \pi \imath \tau}$ is just another complex-valued number. If we gave that a new name like ‘z’, this function would look like the sum of constants times z raised to positive powers. We’d never know it was any kind of weird polynomial.
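Here’s why the upper half-plane matters: when the imaginary part of τ is positive, that number $e^{2 \pi \imath \tau}$ has modulus less than one, so its powers shrink and the series has a chance to converge. A quick check:

```python
import cmath
import math

def nome(tau):
    """The number q = exp(2 * pi * i * tau) that plays the role of 'z'."""
    return cmath.exp(2j * math.pi * tau)

tau = 0.3 + 1.0j   # any tau with positive imaginary part will do
q = nome(tau)
assert abs(q) < 1  # |q| = exp(-2 * pi * Im(tau)), strictly less than 1
print(abs(q))      # about 0.00187
```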

Anyway. This new function ‘f(τ)’ has some properties. It might be something called a weight-2 Hecke eigenform, a thing I am not going to explain without charging someone by the hour. But see the logic here: every elliptic curve matches with some kind of L-Function. Each L-Function matches with some ‘f(τ)’ kind of function. Those functions might or might not be these weight-2 Hecke eigenforms.

So here’s the thing. There was a big hypothesis formed in the 1950s that every rational elliptic curve matches to one of these ‘f(τ)’ functions that’s one of these eigenforms. It’s true. It took decades to prove. You may have heard of it, as the Taniyama-Shimura Conjecture. In the 1990s Wiles and Taylor proved this was true for a lot of elliptic curves, which is what proved Fermat’s Last Theorem after all that time. The rest of it was proved around 2000.

As I said, sometimes you have to make your problem bigger and harder to get something interesting out of it.

I mentioned this above. LMFDB is a fascinating site worth looking at. It’s got a lot of L-Function and Riemann-Zeta function-related materials.

## Something Cute I Never Noticed Before About Infinite Sums

This is a trifle, for which I apologize. I’ve been sick. But I ran across this while reading Carl B Boyer’s The History of the Calculus and its Conceptual Development. This is from the chapter “A Century Of Anticipation”, developments leading up to Newton and Leibniz and The Calculus As We Know It. In particular, while working out the indefinite integrals for simple powers — x raised to a whole number — John Wallis, whom you’ll remember from such things as the first use of the ∞ symbol and beating up Thomas Hobbes for his lunch money, noted this: $\frac{0 + 1}{1 + 1} = \frac{1}{2}$

Which is fine enough. But then Wallis also noted that $\frac{0 + 1 + 2}{2 + 2 + 2} = \frac{1}{2}$

And furthermore that $\frac{0 + 1 + 2 + 3}{3 + 3 + 3 + 3} = \frac{1}{2}$

$\frac{0 + 1 + 2 + 3 + 4}{4 + 4 + 4 + 4 + 4} = \frac{1}{2}$

$\frac{0 + 1 + 2 + 3 + 4 + 5}{5 + 5 + 5 + 5 + 5 + 5} = \frac{1}{2}$

And isn’t that neat? Wallis goes on to conclude that this is true not just for finitely many terms in the numerator and denominator, but also if you carry on infinitely far. This seems like a dangerous leap to make, but they treated infinities and infinitesimals dangerously in those days.

What makes this work is — well, it’s just true; explaining how that can be is kind of like explaining how it is circles have a center point. All right. But we can prove that this has to be true at least for finite terms. A sum like 0 + 1 + 2 + 3 is an arithmetic progression. It’s the sum of a finite number of terms, each of them an equal difference from the one before or the one after (or both).

Its sum will be equal to the number of terms times the arithmetic mean of the first and last. That is, it’ll be the number of terms times the sum of the first and the last terms, divided by two. If we have the sum 0 + 1 + 2 + 3 + up to whatever number you like, which we’ll call ‘N’, we know its value has to be (N + 1) times N divided by 2. That takes care of the numerator.

The denominator, well, that’s (N + 1) cases of the number N being added together. Its value has to be (N + 1) times N. So the fraction is (N + 1) times N divided by 2, itself divided by (N + 1) times N. That’s got to be one-half except when N is zero. And if N were zero, well, that fraction would be 0 over 0 and we know what kind of trouble that is.
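Exact rational arithmetic confirms the pattern for as many finite cases as you have patience for:

```python
from fractions import Fraction

def wallis_ratio(N):
    """(0 + 1 + ... + N) divided by (N + 1) copies of N added together."""
    numerator = sum(range(N + 1))
    denominator = N * (N + 1)
    return Fraction(numerator, denominator)

# One-half every time, with no rounding to muddy the question.
assert all(wallis_ratio(N) == Fraction(1, 2) for N in range(1, 200))
print(wallis_ratio(5))  # 1/2
```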

It’s a tiny bit of mathematics, although you can use it to make an argument about what to expect from $\int{x^n dx}$, as Wallis did. And it delighted me to see and to understand why it should be so.

## Reading the Comics, June 25, 2016: What The Heck, Why Not Edition

I had figured to do Reading the Comics posts weekly, and then last week went and gave me too big a flood of things to do. I have no idea what the rest of this week is going to look like. But given that I had four strips dated before last Sunday I’m going to err on the side of posting too much about comic strips.

Scott Metzger’s The Bent Pinky for the 24th uses mathematics as something that dogs can be adorable about not understanding. Thus all the heads tilted, as if it were me in a photograph. The graph here is from economics, which has long had a challenging relationship with mathematics. This particular graph is qualitative; it doesn’t exactly match anything in the real world. But it helps one visualize how we might expect changes in the price of something to affect its sales. A graph doesn’t need to be precise to be instructional.

Dave Whamond’s Reality Check for the 24th is this essay’s anthropomorphic-numerals joke. And it’s a reminder that something can be quite true without being reassuring. It plays on the difference between “real” numbers and things that really exist. It’s hard to think of a way that a number such as two could “really” exist that doesn’t also allow the square root of -1 to “really” exist.

And to be a bit curmudgeonly, it’s a bit sloppy to speak of “the square root of negative one”, even though everyone does. It’s all right to expand the idea of square roots to cover stuff it didn’t before. But there are at least two numbers that would, squared, equal -1. We usually call them i and -i. Square roots naturally have this problem. Both +2 and -2 squared give us 4. We pick out “the” square root by selecting the positive one of the two. But neither i nor -i is “positive”. (Don’t let the – sign fool you. It doesn’t count.) You can’t say either i or -i is greater than zero. It’s not possible to define a “greater than” or “less than” for complex-valued numbers in a way consistent with their arithmetic. And that’s even before we get into quaternions, in which we summon two more “square roots” of -1 into existence. Octonions can be even stranger. I don’t blame 1 for being worried.
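Python’s complex numbers make both points concrete: i and -i each square to -1, and the language flatly refuses to rank complex numbers against each other.

```python
# Both square roots of -1, on equal footing:
assert 1j ** 2 == -1
assert (-1j) ** 2 == -1

# And there's no "greater than" to pick one out with:
try:
    1j < -1j
except TypeError as err:
    print("no ordering:", err)
```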

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 24th is a pleasant bit of pop-mathematics debunking. I’ve explained in the past how I’m a doubter of the golden ratio. The Fibonacci Sequence has a bit more legitimate interest to it. That’s a sequence of numbers in which each term is the sum of the previous two. The famous one of that is 1, 1, 2, 3, 5, 8, 13, 21, et cetera. It may not surprise you to know that the Fibonacci Sequence has a link to the golden ratio. As it goes on, the ratio between one term and the next one gets close to the golden ratio.

The Harmonic Series is much more deeply weird. A series is the number we get from adding together everything in a sequence. The Harmonic Series grows out of the first sequence you’d imagine ever adding up. It’s 1 plus 1/2 plus 1/3 plus 1/4 plus 1/5 plus 1/6 plus … et cetera. The first time you hear of this you get the surprise: this sum doesn’t ever stop piling up. We say it ‘diverges’. It won’t on your computer; the floating-point arithmetic it does won’t let you add enormous numbers like ‘1’ to tiny numbers like ‘1/531,325,263,953,066,893,142,231,356,120’ and get the right answer. But if you actually added this all up, it would.
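You can watch the floating-point failure directly: a double holds roughly 16 significant decimal digits, so a small enough term simply vanishes into a big one.

```python
# Adding a tiny term to a big one changes nothing in double precision:
assert 1.0 + 1e-30 == 1.0

# So once the running harmonic total is sizable, terms like the one in
# the text stop registering at all, and the computed "sum" stalls:
total = 15.0
tiny = 1 / 531_325_263_953_066_893_142_231_356_120
assert total + tiny == total
```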

The proof gets a little messy. But it amounts to this: 1/2 plus 1/3 plus 1/4? That’s more than 1. 1/5 + 1/6 + 1/7 + 1/8 + 1/9 + 1/10 + 1/11 + 1/12? That’s also more than 1. 1/13 + 1/14 + 1/15 + et cetera up through + 1/32 + 1/33 + 1/34 is also more than 1. You need to pile up more and more terms each time, but a finite string of these numbers will add up to more than 1. So the whole series has to be more than 1 + 1 + 1 + 1 + 1 … and so more than any finite number.
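The three blocks named above really do each top 1, as exact arithmetic confirms:

```python
from fractions import Fraction

# The blocks from the text: 1/2..1/4, 1/5..1/12, 1/13..1/34.
blocks = [(2, 4), (5, 12), (13, 34)]
for low, high in blocks:
    block_sum = sum(Fraction(1, n) for n in range(low, high + 1))
    assert block_sum > 1
    print(float(block_sum))  # about 1.083, then 1.020, then 1.015
```

Each block needs roughly three times as many terms as the last, which is why the divergence is so slow.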

That’s all amazing enough. And then the series goes on to defy all kinds of intuition. Obviously dropping a couple of terms from the series won’t change whether it converges or diverges. Multiplying alternating terms by -1, so you have (say) 1 – 1/2 + 1/3 – 1/4 + 1/5 et cetera produces a series that does converge. It equals the natural logarithm of 2. But if you take those terms and rearrange them, you can produce any real number, positive or negative, that you want.

And, as Weinersmith describes here, if you just skip the correct set of terms, you can make the sum converge. The skipped ones, the ones with a 9 in the denominator, will be 1/9, 1/19, 1/29, 1/90, 1/91, 1/92, 1/290, 1/999, those sorts of things. Amazing? Yes. Absurd? I suppose so. This is why mathematicians learn to be very careful when they do anything, even addition, infinitely many times.
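A filter on the decimal digits shows both how many terms go and how much slower the surviving sum grows. The thresholds in the last line are my loose sanity checks, not the series’ actual limit:

```python
# Which denominators from 1 to 100 get skipped?  Those containing a 9.
skipped = [n for n in range(1, 101) if '9' in str(n)]
print(len(skipped))  # 19: the numbers 9, 19, ..., 89, plus all of 90-99

# The depleted partial sums lag well behind the full harmonic ones.
full = sum(1 / n for n in range(1, 100_001))
depleted = sum(1 / n for n in range(1, 100_001) if '9' not in str(n))
assert full > 11.5 > 10.5 > depleted
```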

John Deering’s Strange Brew for the 25th is a fear-of-mathematics joke. The sign the warrior’s carrying is legitimate algebra, at least so far as it goes. The right-hand side of the equation gets cut off. In time, it would get to the conclusion that x equals –19/2, or -9.5.

## Terrible And Less-Terrible Things with Pi

We are coming around “Pi Day”, the 14th of March, again. I don’t figure to have anything thematically appropriate for the day. I figure to continue the Leap Day 2016 Mathematics A To Z, and I don’t tend to do a whole two posts in a single day. Two just seems like so many, doesn’t it?

But I would like to point people who’re interested in some π-related stuff to what I posted last year. Those posts were:

• Calculating Pi Terribly, in which I show a way to work out the value of π that’s fun and would take forever. I mean, yes, properly speaking they all take forever, but this takes forever just to get a couple of digits right. It might be fun to play with but don’t use this to get your digits of π. Really.
• Calculating Pi Less Terribly, in which I show a way to do better. This doesn’t lend itself to any fun side projects. It’s just calculations. But it gets you accurate digits a lot faster.

## Calculating Pi Less Terribly

Back on “Pi Day” I shared a terrible way of calculating the digits of π. It’s neat in principle, yes. Drop a needle randomly on a uniformly lined surface. Keep track of how often the needle crosses over a line. From this you can work out the numerical value of π. But it’s a terrible method. To be sure that π is about 3.14, rather than 3.12 or 3.38, you can expect to need to do over three and a third million needle-drops. So I described this as a terrible way to calculate π.
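For the curious, the needle-drop experiment is easy to simulate. This sketch assumes the needle is exactly as long as the line spacing, so a drop crosses a line with probability 2/π:

```python
import math
import random

def buffon_estimate(drops, seed=12345):
    """Estimate pi by simulated needle drops.  The needle length equals
    the line spacing, so the crossing probability is 2/pi."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(drops):
        center = rng.uniform(0, 0.5)         # distance to the nearest line
        angle = rng.uniform(0, math.pi / 2)  # needle's angle to the lines
        if center <= 0.5 * math.sin(angle):
            crossings += 1
    return 2 * drops / crossings

print(buffon_estimate(100_000))  # near pi, but only to a couple of digits
```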

A friend on Twitter asked if it was worse than adding up 4 * (1 – 1/3 + 1/5 – 1/7 + … ). It’s a good question. The answer is yes, it’s far worse than that. But I want to talk about working π out that way.

Tom Batiuk’s Funky Winkerbean for the 17th of May, 2015. The worst part of this strip is that Science Teacher Mark Twain will go back to the teachers’ lounge and complain that none of his students got it. This isn’t part of the main post, but the comic strip happened to mention π on a day when I’m talking about π, so who am I to resist coincidence?
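How slow is that alternating series? The error after N terms shrinks roughly like 1/N, so each new correct digit costs about ten times as many terms as the last. A sketch:

```python
import math

def leibniz_pi(terms):
    """4 * (1 - 1/3 + 1/5 - 1/7 + ...), cut off after `terms` terms."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

# The error shrinks roughly like 1/terms.
print(abs(leibniz_pi(10) - math.pi))    # about 0.0998
print(abs(leibniz_pi(1000) - math.pi))  # about 0.001
```

Slow, yes, but still enormously better than millions of needle drops.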