Polynomials turn up all over the place. There are multiple good reasons for this. For one, suppose we have any continuous function that we want to study. ("Continuous" has a technical definition, but if you imagine what we might mean by that in ordinary English, that we could draw the function without having to lift pen from paper, you've got it. The exceptions are freak cases designed to confuse students taking real analysis, continuous functions that don't look like anything you could ever draw, which is jolly good fun until the grades are returned.) If we're willing to accept a certain margin of error around that function, though, then at least over a closed interval we can always find a polynomial that stays within that margin of error of the function we really want to study. (This is the Weierstrass approximation theorem.) I have read, albeit in secondary sources, that for a while in the 18th century it was thought a mathematician could just as well define a function as "something that a polynomial can approximate".
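That "within any margin of error" promise can be tried in a few lines. A minimal sketch, where I'm choosing (my assumption, not anything special) the Taylor polynomial of cosine as the approximating polynomial on the closed interval [-1, 1]:

```python
import math

def taylor_cos(x, n_terms=8):
    """Evaluate a Taylor polynomial of cos at x.

    Each term is (-1)^k * x^(2k) / (2k)!; this is a genuine polynomial in x.
    """
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(n_terms))

# Check that the polynomial stays within a margin of error of cos on [-1, 1],
# sampling the interval at a couple hundred points.
margin = 1e-6
worst = max(abs(taylor_cos(x / 100) - math.cos(x / 100))
            for x in range(-100, 101))
print(worst < margin)  # True: uniformly within the margin on [-1, 1]
```

The margin and the number of terms are linked: demand a tighter margin and we may need a higher-degree polynomial, but on a closed interval we can always find one that fits.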

Another good reason is that polynomials are easy to work with, in the way that calculus looks at things. Two of the core elements of calculus are figuring out how fast a function changes, and figuring out how much area there is underneath a function. For polynomials, finding this rate of change and finding this area underneath are easy, following nice and exact rules that don’t require any clever tricks.
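Those nice and exact rules fit in a few lines of code. A sketch, using my own convention here of writing a polynomial as its list of coefficients, lowest power first:

```python
def derivative(coeffs):
    """Differentiate a polynomial given as [a0, a1, a2, ...] (a0 + a1*x + ...).

    The power rule: the derivative of a_k * x^k is k * a_k * x^(k-1).
    """
    return [k * a for k, a in enumerate(coeffs)][1:] or [0]

def antiderivative(coeffs):
    """Integrate term by term: a_k * x^k becomes a_k / (k+1) * x^(k+1)."""
    return [0] + [a / (k + 1) for k, a in enumerate(coeffs)]

# For 2 + 3x + x^2: the rate of change is 3 + 2x, and the area underneath,
# from 0 to 1, is 2 + 3/2 + 1/3.
p = [2, 3, 1]
print(derivative(p))      # [3, 2]
print(antiderivative(p))  # [0, 2.0, 1.5, 0.333...]
```

No clever tricks anywhere: each term is handled on its own, by one multiplication or one division.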

Once we’ve learned how to do the basic manipulations of polynomials, which amount to evaluating them at particular points, adding two polynomials together, and multiplying one polynomial by another, then mastering them for simple polynomials means we’ve got them mastered for the most complicated ones too. Polynomials can, in principle, be made up of hundreds of thousands of terms (I can’t say any sane person would do that), and they’re not any *harder* to work with that way. They may be longer, but they’re not harder.
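The whole toolkit of basic manipulations can be sketched out briefly, again with my coefficient-list convention (lowest power first); the same few lines handle a three-term polynomial or a hundred-thousand-term one:

```python
def evaluate(coeffs, x):
    """Horner's method: evaluate a0 + a1*x + ... + an*x^n at the point x."""
    result = 0
    for a in reversed(coeffs):
        result = result * x + a
    return result

def add(p, q):
    """Add coefficient lists term by term, padding the shorter with zeros."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def multiply(p, q):
    """Each a_i * x^i times b_j * x^j contributes to the x^(i+j) coefficient."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (1 + x) * (1 + x) = 1 + 2x + x^2; evaluated at x = 3 that's 16.
print(multiply([1, 1], [1, 1]))               # [1, 2, 1]
print(evaluate(multiply([1, 1], [1, 1]), 3))  # 16
```

Longer coefficient lists take longer to grind through, but nothing about the procedure changes.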

There are some functions that turn out to be interesting and yet not well-approximated by polynomials. However, if we allow the idea of dividing one polynomial by another, a lot of *those* turn out to be approximable after all.
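One standard way to divide one polynomial by another is the Padé approximant. As a small illustration of the idea (e^x is itself perfectly polynomial-friendly, but it keeps the arithmetic honest), the [1/1] Padé approximant of e^x is (2 + x)/(2 - x), a linear polynomial over a linear polynomial, built from the same information as the degree-2 Taylor polynomial:

```python
import math

def taylor2(x):
    """Degree-2 Taylor polynomial of e^x about 0."""
    return 1 + x + x * x / 2

def pade11(x):
    """The [1/1] Pade approximant of e^x: one polynomial divided by another."""
    return (2 + x) / (2 - x)

x = 0.5
err_poly = abs(taylor2(x) - math.exp(x))
err_rational = abs(pade11(x) - math.exp(x))
print(err_rational < err_poly)  # True: the ratio does a bit better here
```

For functions with poles or other behavior no polynomial can mimic, this kind of ratio is often the only one of the two that works at all.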

Polynomial approaches work so very well that sometimes, even when we have things that don’t look a thing like polynomials, we’ll use the language and conventions and standards built up by polynomials for them. For example, if we’re interested in approximating a function that’s periodic, one that comes back to repeat itself, forever, we may use the “trigonometric polynomials”. The excuse for that name is mostly that we can write out, in abstract form, a sum of a large number of sines and cosines using notation that looks a lot like what we use for normal polynomials.
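A sketch of what such a "trigonometric polynomial" looks like in practice; the particular periodic function (a square wave) and the number of terms are my own choices for illustration:

```python
import math

def square_wave_trig_poly(x, n_terms=100):
    """Partial Fourier sum for a square wave: (4/pi) * sum sin((2k+1)x)/(2k+1).

    Written out term by term, it reads like a polynomial whose "powers"
    are sines of ever-higher multiples of x.
    """
    return (4 / math.pi) * sum(math.sin((2 * k + 1) * x) / (2 * k + 1)
                               for k in range(n_terms))

# The square wave is +1 everywhere on (0, pi); the trigonometric
# polynomial gets close, and more terms get it closer.
print(abs(square_wave_trig_poly(math.pi / 2) - 1) < 0.01)  # True
```

Adding terms plays the same role that raising the degree does for ordinary polynomials: more terms, tighter fit.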

We’ll go on to making polynomial interpolations. In fact, we already have, with those constant approximations used last week. Those are legitimate polynomials, ones in which the constant term, the one not multiplied by any power of x, is the only coefficient that isn’t zero. Oh, where the function jumps that’s not a polynomial, polynomials just don’t *do* sudden leaps, but that’s the “piecewise” part of “piecewise constant”. We follow one polynomial for a piece, then follow another polynomial for another piece; we might follow yet another polynomial for a third piece. Within each stretch, last week, we had a polynomial of constant value. We could also have an interpolation just as piecewise, with different polynomials defined on these different stretches.
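A sketch of that piecewise-constant scheme, assuming (my choice here) that each stretch takes the function's value at its left end:

```python
import math

def piecewise_constant(f, a, b, pieces):
    """Build a piecewise-constant approximation of f on [a, b].

    Each piece is a degree-zero polynomial: the constant term alone,
    set to f's value at the left end of that stretch.
    """
    width = (b - a) / pieces
    values = [f(a + i * width) for i in range(pieces)]

    def approx(x):
        i = min(int((x - a) / width), pieces - 1)  # which stretch x falls in
        return values[i]

    return approx

step = piecewise_constant(math.sin, 0.0, math.pi, 100)
print(abs(step(1.0) - math.sin(1.0)) < 0.05)  # True: within one stretch's drift
```

Swapping in a fancier polynomial on each stretch, instead of a constant, is exactly the piecewise interpolation idea; the bookkeeping of which stretch we're on stays the same.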

And I haven’t yet named the *best* thing of all about polynomials.
