May 2013 turned out to be an interesting month for number theory, in that there’ve been big breakthroughs on two long-standing conjectures. Number theory is great fun in part because it’s got many conjectures that anybody can understand but which are really hard to prove. The one that’s gotten the most attention, at least from the web sites that I read which dip into popular mathematics, has been the one about twin primes.

It’s long been suspected that there are infinitely many pairs of “twin primes”, such as 5 and 7, or 11 and 13, or 29 and 31, separated by only two. It’s not proven that there are infinitely many such pairs, not yet. Yitang Zhang of the University of New Hampshire has announced a proof that there are infinitely many pairs of primes that are no more than 70,000,000 apart. That’s admittedly a long way from a gap of two, but it’s the first finite bound of any kind, which is far better than what there was before. While there are infinitely many primes — anyone can prove that — the number of them in any fixed-width range tends to decrease, and it would be imaginable that the distance between consecutive primes just keeps increasing without bound, the way that (say) each pair of successive powers of two is farther apart than the previous pair was. But it’s not so, and that’s neat to see.
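For anyone who’d like to watch twin primes turn up for themselves, here’s a little sketch of my own (nothing to do with Zhang’s proof): sieve out the primes, then pick out the pairs that differ by exactly two.

```python
# My own illustration, not anything from Zhang's paper: list the
# twin-prime pairs below 100.

def primes_up_to(n):
    """All primes <= n, by the sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, n + 1, p):
                sieve[multiple] = False
    return [i for i, flag in enumerate(sieve) if flag]

primes = primes_up_to(100)
prime_set = set(primes)
twins = [(p, p + 2) for p in primes if p + 2 in prime_set]
print(twins)  # [(3, 5), (5, 7), (11, 13), (17, 19), (29, 31), (41, 43), (59, 61), (71, 73)]
```

Run it with a larger limit and you’ll see the pairs keep appearing, though more and more sparsely, which is exactly the tension the conjecture is about.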

Less publicized is a proof of Goldbach’s Odd Conjecture. Goldbach’s Conjecture is the famous one that every even number bigger than two can be written as the sum of two primes. An equivalent form says that every whole number — even or odd — larger than five can be written as the sum of three primes. Goldbach’s Odd Conjecture cuts that down, asserting only that every odd whole number greater than five can be written as the sum of three primes. And it’s this which Harald Andrés Helfgott claims to have a proof for. (He also claims a proof that every odd number greater than seven can be written as the sum of three odd primes, that is, that the prime two isn’t needed except for single-digit odd numbers.)
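To get a feel for what the conjecture asserts, here’s a sketch of my own (nothing from Helfgott’s paper) that hunts for one way to write an odd number as a sum of three primes:

```python
# My own illustration: find primes p <= q <= r with p + q + r == n,
# for an odd n greater than five.

def primes_up_to(n):
    """All primes <= n, by the sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, n + 1, p):
                sieve[multiple] = False
    return [i for i, flag in enumerate(sieve) if flag]

def three_primes(n):
    """Return primes (p, q, r) with p <= q <= r and p + q + r == n, or None."""
    primes = primes_up_to(n)
    prime_set = set(primes)
    for i, p in enumerate(primes):
        for q in primes[i:]:
            r = n - p - q
            if r < q:           # past the midpoint; no r >= q remains
                break
            if r in prime_set:
                return (p, q, r)
    return None

print(three_primes(21))   # (2, 2, 17)
print(three_primes(999))  # (3, 5, 991)
```

Note that 21 needs the prime two here; that’s the distinction Helfgott’s stronger parenthetical claim about three *odd* primes is getting at.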

While I have let my eyes glaze over the proof — you can download it from arxiv.org — I don’t have any grounds to say whether I believe it’s correct or not. Too much of it is too far outside my field of expertise for me to say anything swiftly. But I can explain the rough idea of the proof, and it interests me because in an important way it’s like the famously controversial Four-Color Map Theorem proof of Kenneth Appel and Wolfgang Haken. I don’t imagine this proof — trusting it holds up — will be controversial, though.

The approach Helfgott uses is to show that every odd number above a particular threshold obeys the theorem, and then check all the numbers below that threshold to see that they work out. In essence, Helfgott checks each possible case for the theorem and shows that it holds, which is exactly how Appel and Haken showed every map can be colored with at most four colors. The difference certainly isn’t in the number of cases: Helfgott needs to test every odd number less than 1,000,000,000,000,000,000,000,000,000,000 (that is, 10^30), which is certainly beyond the ability of any human brain to take in all together, and probably worse than the 2,500 diagrams and 400 pages of microfiche serving as supplements to Appel and Haken’s 50-page main proof. In either case we’re dependent on machines to do the verification.

But I suppose it’s easier to be convinced that there aren’t any cases overlooked in running through the odd numbers below that bound, and that the program finding the prime numbers which add up to each odd number is working correctly, than it is to be convinced that every possible different map has been thought of and colored in correctly. At least, I can imagine overlooking a map or mistakenly thinking it’s been colored correctly; I can’t imagine overlooking an odd number.

It’s of interest to me that it’s actually been known since 1939 that every large enough odd number is the sum of three prime numbers, so that *in principle* the proof could have been done since Franklin Roosevelt’s second term. However, the earliest bound was the rather intimidatingly large 3^(3^15), which if I’ve got this right is a number that takes about seven million digits just to *write* (do please double-check my work), so trying out all the odd numbers below that limit wasn’t going to happen. (Consider that a googol only requires a hundred digits to write out, and this is a number something like seventy thousand times as long.) Later work lowered that boundary; this is apparently the first time it’s gotten low enough to actually test all the cases below the bound.
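If you do want to double-check the digit count without trying to write the monster out, logarithms will do it: a positive whole number N has floor(log10 N) + 1 decimal digits, and log10 of 3^(3^15) is just 3^15 times log10 of 3. A quick sketch (assuming 3^(3^15) is indeed the bound in question):

```python
import math

# Digit count of 3**(3**15) without ever constructing the number:
# a positive integer N has floor(log10(N)) + 1 decimal digits, and
# log10(3**(3**15)) = (3**15) * log10(3).
exponent = 3 ** 15                                # 14,348,907
digits = math.floor(exponent * math.log10(3)) + 1
print(digits)            # 6846169 -- about seven million digits
print(digits // 100)     # 68461 -- roughly how many googol-lengths that is
```

So “about seven million digits” and “something like seventy thousand times as long as a googol” both check out.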

Also, it turns out computers can check every number up to that 10^30 threshold for properties like how to write it as the sum of three prime numbers, which is also pretty impressive.

(Isn’t it a bit odd that I say 3^(3^15) takes “about seven million digits to write”, when I just wrote it — twice — using four digits each time? And yet you know just what I mean, too, and trying to be more precise about what I mean would produce a pile of unintelligible words.)

I have a stupid question: why is it that you can prove the conjecture for all numbers higher than a certain threshold? Why does the proof work for numbers > 10 to the power of 30, but not for smaller ones? What is special about 10^30?


It’s not a stupid question although it’s one I’m not sure I can answer quite well enough given that a lot of the paper calls on things that I don’t know well.

If I am reading it correctly, the core idea of the proof is to substitute summations over numbers (as you might sum over primes) with integrals. This is an approximation, yes, but if you’re summing over a large enough number of small enough things, the substitution becomes as accurate as you could possibly need. Again, if I have this correctly, the bound of 10^30 comes about because for numbers smaller than that, the integral approximation isn’t quite close enough to the summation to guarantee that there aren’t any overlooked numbers.

I’d defer happily to the expertise of anyone who knows number theory better and can more confidently follow the course of the proof. I admit I’m intimidated just by some of the symbols (a valentine heart as a subscript? How did *that* come about?) and, after all, it is a 130-page proof.

Thanks a lot! Replacing sums by integrals is very common in solid state physics (summing over all electron states…), so I can relate to that. I would not have expected that in pure mathematics though!


Quite glad to help (assuming I have got it right).

I should say, for anyone who’s reading this but never ventured far enough into calculus to look at integrals on purpose, that the point of replacing summations with integrals is that it’s generally a lot easier to study integrals than summations. So if you can replace a summation with an integral you’re probably well off doing that, although often you can only make that replacement if the sum is over enough pieces.
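As a toy illustration of why that works (my own example, far simpler than the delicate estimates in the actual proof): compare the harmonic sum 1 + 1/2 + … + 1/N against the integral of 1/x from 1 to N, which is ln N.

```python
import math

# My own toy example, not Helfgott's estimates: the harmonic sum
# 1 + 1/2 + ... + 1/N versus the integral of 1/x from 1 to N, i.e. ln N.
for N in (10, 1_000, 100_000):
    harmonic = sum(1.0 / n for n in range(1, N + 1))
    integral = math.log(N)
    print(N, round(harmonic, 5), round(integral, 5), round(harmonic - integral, 5))

# The difference settles toward the Euler-Mascheroni constant, about
# 0.5772, so relative to the size of the sum the integral's error
# becomes negligible as N grows.
```

The gap between the sum and the integral stays bounded while both keep growing, so the integral tracks the sum better and better, relatively speaking, the more pieces the sum has.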


“And yet you know just what I mean, too, and trying to be clear about what I mean would produce a pile of unintelligible words”

Hm. How about saying ‘its full decimal expansion is nearly seven million digits’?


Yeah, you’re right. That works. Sometimes I don’t know how to say things clearly. Thank you.
