## Error

This is one of my A to Z words that everyone knows. An error is some mistake, evidence of our human failings, to be minimized at all costs. That’s … well, it’s an attitude that doesn’t let you use error as a tool.

An error is the difference between what we would like to know and what we do know. Usually, what we would like to know is something hard to work out. Sometimes it requires complicated work. Sometimes it requires an infinite amount of work to get exactly right. Who has the time for that?

This is how we use errors. We look for methods that approximate the thing we want, along with estimates of how big an error that method makes. Usually, the method involves doing some basic step some large number of times. And usually, if we do the step more times, the error we make gets smaller. My essay “Calculating Pi Less Terribly” shows an example of this. If we add together more terms from that Leibniz formula we get a running total that’s closer to the actual value of π.
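Here's a minimal sketch of that idea in Python. The function name `leibniz_partial_sum` is my own label, not something from the essay; it just adds up the first so-many terms of the Leibniz series and lets you watch the error against π shrink.

```python
import math

def leibniz_partial_sum(n_terms):
    """Sum the first n_terms of the Leibniz series:
    pi = 4 * (1 - 1/3 + 1/5 - 1/7 + ...)."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

# The more terms we take, the smaller the error against the true pi.
for n in (10, 100, 1000, 10000):
    approx = leibniz_partial_sum(n)
    print(n, approx, abs(approx - math.pi))
```

Running this shows the gap between the partial sum and π getting steadily smaller as the number of terms grows.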

This means we have a reliable numerical trick. Suppose we have a formula, such as that Leibniz formula for π, which requires an infinite amount of work but would give us an answer we wanted. And suppose we know an estimate for the error we get if we calculate only finitely many terms. We might decide that we could tolerate an error of some convenient small number, like 0.001. Then we would calculate enough terms that we know our error can’t be bigger than whatever we chose. And thus we’re close enough to the right answer for whatever our needs are.
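That trick can be sketched in code too. For an alternating series like the Leibniz formula, a standard error estimate (the alternating series test, which the essay doesn't name but which fits here) says the error after some number of terms is no bigger than the first term left out. So we keep adding terms until the next term drops below our tolerated error; the function name `pi_to_tolerance` is my own invention for the sketch.

```python
import math

def pi_to_tolerance(tol):
    """Add Leibniz-series terms until the first omitted term,
    4 / (2k + 1), is no bigger than tol. By the alternating series
    estimate, the partial sum is then within tol of pi."""
    total = 0.0
    k = 0
    while 4 / (2 * k + 1) > tol:  # next term still exceeds the tolerance
        total += 4 * (-1) ** k / (2 * k + 1)
        k += 1
    return total

approx = pi_to_tolerance(0.001)
print(approx, abs(approx - math.pi))
```

With a tolerance of 0.001 this takes a couple thousand terms, which is the Leibniz formula being, as the essay's title has it, a terrible way to calculate π; but the answer it stops at is guaranteed close enough.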

But the error estimate can also give us exactly correct answers. Suppose we have some formula, such as the Leibniz formula, for something interesting. We can show that the formula is equal to some number. Let me give that some-number the name L. If I don’t give that some-number a name the next few sentences are going to be too hard to read.

Pick some arbitrary tolerated error. That word “arbitrary” is important. I know nothing about which tolerated error gets picked; the only requirement I have is that it’s larger than zero. It may be a very large number. It may be a very small number. But it is some number bigger than zero, and that’s all we know or care about it.

Suppose we can show that it’s always the case that, when we add together enough terms in our formula, we get numbers which are no more than that tolerated error away from L. If the tolerance for error is smaller, we probably need more terms. That’s all right. As long as we eventually get, and stay, no more than our tolerated error away from L, then we’re done. Because we’ve proven that our formula adds up to something exactly equal to L.

Why? Well, how do we know two numbers are different? One way is that the difference between them is something bigger than zero. 4 and 3 are a distance of 1 apart. 5.6 and 5.75 are a distance of 0.15 apart. -9.81 and -9.79 are a distance of 0.02 apart.

But our number L, and whatever number our formula adds up to if we take enough terms, are no farther apart than our arbitrary tolerated error. That tolerated error might be 86. That tolerated error might be 0.25. That tolerated error might be 0.000 000 000 000 000 000 173. It can be *any* positive number. And we, supposedly, proved that the difference between our formula’s sum and the number L is smaller than any of these numbers.

Since our tolerated error was arbitrary, it can be any and every positive number. And this means the difference between the number we got in our formula and the number L has to be smaller than any positive number. The only thing the difference between our formula’s sum and the number L *can be* is zero. And if the difference between our formula’s sum and the number L is zero, then they’re equal.

And this is yet another stock mathematician’s trick. We can show two things are equal by showing the difference between them has to be smaller than some arbitrary positive number. Somehow, we can often do that. And so we prove two things are equal to one another.
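Written out compactly, the argument above comes to this (my own notation: S for the formula's sum, ε for the arbitrary tolerated error):

```latex
\text{If for every } \varepsilon > 0 \text{ we have } |S - L| < \varepsilon,
\text{ then } |S - L| = 0, \text{ and so } S = L.
```

The whole force of the trick is in that “for every”: a single small ε proves nothing, but beating *all* of them at once leaves zero as the only possible difference.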

Whenever I make an error, my partner likes to tell me that I “broke math”. This all stems from the one time I was given all the steps to the problem and still got an answer that wasn’t even close.


Aw, that sort of thing happens to everybody. Mathematicians especially. There’s a bit of folklore that says to never give an arithmetic problem to a mathematician, because even if you do get an answer, it won’t be anywhere near right.
