This one I saw through John Allen Paulos’s Twitter feed. He points out that it’s like the Collatz conjecture but is, in fact, proven. If you try this yourself don’t make the mistake of giving up too soon. You might, say, start with 12. Sum the squares of its digits and you get 5, which is neither 1 nor anything in that 4-16-37-58-89-145-42-20 cycle. Not so! Square 5 and you get 25. Square those digits and add them and you get 29. Square those digits and add them and you get 85. And what comes next?
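The iteration is easy to play with yourself. A quick sketch (the function names are my own) that follows a starting number until it hits 1 or lands in that cycle:

```python
# Follow the digit-square-sum iteration: every positive integer either
# reaches 1 or falls into the 4-16-37-58-89-145-42-20 cycle.

CYCLE = {4, 16, 37, 58, 89, 145, 42, 20}

def digit_square_sum(n):
    """Sum of the squares of the decimal digits of n."""
    return sum(int(d) ** 2 for d in str(n))

def trajectory(n):
    """Follow the iteration until we hit 1 or first enter the cycle."""
    seen = [n]
    while n != 1 and n not in CYCLE:
        n = digit_square_sum(n)
        seen.append(n)
    return seen

print(trajectory(12))   # [12, 5, 25, 29, 85, 89] -- lands in the cycle
```

Run it on 12 and you see the sequence from the paragraph above; run it on 7 and you reach 1, so 7 is one of the "happy" numbers.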
This is about a proof of Fermat’s Theorem on Sums of Two Squares. According to it, an odd prime number — let’s reach deep into the alphabet and call it p — can be written as the sum of two squares if and only if p is one more than a whole multiple of four. It’s a proof that uses fixed-point methods. This is a fun kind of proof, at least to my sense of fun. It’s an approach that’s got a clear physical interpretation. Imagine picking up a (thin) patch of bread dough, stretching it out some and maybe rotating it, and then dropping it back on the board. There’s at least one bit of dough that’s landed in the same spot it was before. Once you see this you will never be able to just roll out dough the same way. So here the proof involves setting up an operation on integers which has a fixed point, and showing that the fixed point makes the property true.
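This doesn’t touch the fixed-point argument itself, but the statement of the theorem is easy to test empirically. A brute-force sketch (function names are mine):

```python
# Empirical check of the statement: an odd prime p is a sum of two
# squares exactly when p leaves remainder 1 on division by 4.
from math import isqrt

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, isqrt(n) + 1))

def two_square_decomposition(p):
    """Return (a, b) with a*a + b*b == p, or None if no such pair exists."""
    for a in range(isqrt(p) + 1):
        b_squared = p - a * a
        b = isqrt(b_squared)
        if b * b == b_squared:
            return (a, b)
    return None

for p in (n for n in range(3, 100) if is_prime(n)):
    # A decomposition exists exactly for the p = 4k + 1 primes.
    assert (two_square_decomposition(p) is not None) == (p % 4 == 1)
```

So 13 comes out as 4 + 9, while 7 and 11 come out empty-handed, just as the theorem promises.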
John D Cook, who runs a half-dozen or so mathematics-fact-of-the-day Twitter feeds, looks into calculating the volume of an egg. It involves calculus, as finding the volume of many interesting shapes does. I am surprised to learn the volume can be written out as a formula that depends on the shape of the egg. I would have bet that it couldn’t be expressed in “closed form”. This is a slightly flexible term. It’s meant to mean the thing can be written using only normal, familiar functions. However, we pretend that the inverse hyperbolic tangent is a “normal, familiar” function.
For example, there’s the surface area of an egg. This can be worked out too, again using calculus. It can’t be written even with the inverse hyperbolic cotangent, so good luck. You have to get into numerical integration if you want an answer humans can understand.
Stand on the edge of a plot of land. Walk along its boundary. As you walk the edge pay attention. Note how far you walk before changing direction, even in the slightest. When you return to where you started consult your notes. Contained within them is the area you circumnavigated.
If that doesn’t startle you perhaps you haven’t thought about how odd that is. You don’t ever touch the interior of the region. You never do anything like see how many standard-size tiles would fit inside. You walk a path that is as close to one-dimensional as your feet allow. And encoded in there somewhere is an area. Stare at that incongruity and you realize why integrals baffle the student so. They have a deep strangeness embedded in them.
We who do mathematics have always liked integrals. They grow, in the western tradition, out of geometry. Given a shape, what is a square that has the same area? There are shapes it’s easy to find the area for, given only straightedge and compass: a rectangle? Easy. A triangle? Just as straightforward. A polygon? If you know triangles then you know polygons. A lune, the crescent-moon shape formed by taking a circular cut out of a circle? We can do that. (If the cut is the right size.) A circle? … All right, we can’t do that, but we spent two thousand years trying before we found that out for sure. And we can do some excellent approximations.
That bit of finding-a-square-with-the-same-area was called “quadrature”. The name survives, mostly in the phrase “numerical quadrature”. We use that to mean that we computed an integral’s approximate value, instead of finding a formula that would get it exactly. The otherwise obvious choice of “numerical integration” we use already. It describes computing the solution of a differential equation. We’re not trying to be difficult about this. Solving a differential equation is a kind of integration, and we need to do that a lot. We could recast a solving-a-differential-equation problem as a find-the-area problem, and vice-versa. But that’s bother, if we don’t need to, and so we talk about numerical quadrature and numerical integration.
Integrals are built on two infinities. This is part of why it took so long to work out their logic. One is the infinity of number; we find an integral’s value, in principle, by adding together infinitely many things. The other is an infinity of smallness. The things we add together are infinitesimally small. That we need to take things, each smaller than any number yet somehow not zero, and in such quantity that they add up to something, seems paradoxical. Their geometric origins had to be merged into those of arithmetic and of algebra, and that is not easy. Bishop George Berkeley made a steady name for himself in calculus textbooks by pointing this out. We have worked out several logically consistent schemes for evaluating integrals. They work, mostly, by showing that we can make the error caused by approximating the integral smaller than any margin we like. This is a standard trick, or at least it is, now that we know it.
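You can watch that “smaller than any margin we like” idea happen. A sketch (all names mine) approximating the integral of x^2 on [0, 1], whose exact value is 1/3, with finite sums of ever-smaller pieces:

```python
# Left-endpoint Riemann sums for the integral of x^2 on [0, 1].
# The exact value is 1/3; the error shrinks as the pieces get smaller.

def riemann_sum(f, a, b, n):
    """Left-endpoint sum using n equal pieces of width (b - a)/n."""
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))

exact = 1 / 3
errors = [abs(riemann_sum(lambda x: x * x, 0, 1, n) - exact)
          for n in (10, 100, 1000)]
assert errors[0] > errors[1] > errors[2]   # the error keeps shrinking
```

Nothing here proves convergence, of course; it just shows the scheme behaving the way the logically consistent treatments promise it will.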
That “in principle” above is important. We don’t actually work out an integral by finding the sum of infinitely many, infinitely tiny, things. It’s too hard. I remember in grad school the analysis professor working out by the proper definitions the integral of 1. This is as easy an integral as you can do without just integrating zero. He escaped with his life, but it was a close scrape. He offered the integral of x as a way to test our endurance, without actually doing it. I’ve never made it through that.
The greatest tool we have on our side is the Fundamental Theorem of Calculus. Even the name promises it’s the greatest tool we might have. This rule tells us how to connect integrating a function to differentiating another function. If we can find a function whose derivative is the thing we want to integrate, then we have a formula for the integral. It’s that function we found. What a fantastic result.
The trouble is it’s so hard to find functions whose derivatives are the thing we wanted to integrate. There are a lot of functions we can find, mind you. If we want to integrate a polynomial it’s easy. Sine and cosine and even tangent? Yeah. Logarithms? A little tedious but all right. A constant number raised to the power x? Also tedious but doable. A constant number raised to the power x^2? Hold on there, that’s madness. No, we can’t do that.
There is a weird grab-bag of functions we can find these integrals for. They’re mostly ones we can find some integration trick for. An integration trick is some way to turn the integral we’re interested in into a couple of integrals we can do and then mix back together. A lot of a Freshman Calculus course is a heap of tricks we’ve learned. They have names like “u-substitution” and “integration by parts” and “trigonometric substitution”. Some of them are really exotic, such as turning a single integral into a double integral because that leads us to something we can do. And there’s something called “differentiation under the integral sign” that I don’t know of anyone actually using. People know of it because Richard Feynman, in his fun memoir What Do You Care What Other People Think: 250 Pages Of How Awesome I Was In Every Situation Ever, mentions how awesome it made him in so many situations. Mathematics, physics, and engineering nerds are required to read this at an impressionable age, so we fall in love with a technique no textbook ever mentions. Sorry.
I’ve written about all this as if we were interested just in areas. We’re not. We like calculating lengths and volumes and, if we dare venture into more dimensions, hypervolumes and the like. That’s all right. If we understand how to calculate areas, we have the tools we need. We can adapt them to as many or as few dimensions as we need. By weighting integrals we can do calculations that tell us about centers of mass and moments of inertia, about the most and least probable values of something, about all quantum mechanics.
As often happens, this powerful tool starts with something anyone might ponder: what size square has the same area as this other shape? All it takes from there is thinking seriously about it.
A friend took me up last night on my offer to help with any mathematics she was unsure about. I’m happy to do it, though of course it came as I was trying to shut things down for bed. But that part was inevitable and besides the exam was today. I thought it worth sharing here, though.
There’s going to be some calculus in this. There’s no avoiding that. If you don’t know calculus, relax about what the symbols exactly mean. It’s a good trick. Pretend anything you don’t know is just a symbol for “some mathematics thingy I can find out about later, if I need to, and I don’t necessarily have to”.
“Integration by parts” is one of the standard tricks mathematicians learn in calculus. It comes in handy if you want to integrate a function that itself looks like the product of two other functions. You find the integral of a function by breaking it up into two parts, one of which you differentiate and one of which you integrate. This gives you a product of functions and then a new integral to do. A product of functions is easy to deal with. The new integral … well, if you’re lucky, it’s an easier integral than you started with.
As you learn integration by parts you learn to look for ways to break up functions so the new integral is easier. There’s no hard and fast rule for this. But bet on “the part that has a polynomial in it” as the part that’s better differentiated. “The part that has sines and cosines in it” is probably the part that’s better integrated. An exponential, like 2^x, is as easily differentiated as integrated. The exponential of a function, like say 2^(x^2), is better differentiated. These usually turn out impossible to integrate anyway. At least impossible without using crazy exotic functions.
So your classic integration-by-parts problem gives you an expression like this:

∫ x cos(x) dx = x sin(x) − ∫ sin(x) dx
If you weren’t a mathematics major that might not look better to you, what with it still having integrals and sines and stuff in it. But ask your mathematics friend. She’ll tell you. The thing on the right-hand side is way better. That last term, the integral of the sine of x? She can do that in her sleep. It barely counts as work, at least by the time you’ve gotten, in class, to doing integration by parts. It’ll be -cos(x).
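A classic parts exercise of this shape is ∫ x cos(x) dx, whose answer works out to x sin(x) + cos(x). A quick sanity check, assuming that example: differentiate the answer numerically and you should recover x cos(x).

```python
# Check an integration-by-parts answer by differentiating it back.
# For the classic exercise, the antiderivative of x*cos(x) is
# x*sin(x) + cos(x); a centered finite difference should recover x*cos(x).
import math

def antiderivative(x):
    return x * math.sin(x) + math.cos(x)

def numerical_derivative(f, x, h=1e-6):
    """Centered finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (0.5, 1.0, 2.0, 3.0):
    assert abs(numerical_derivative(antiderivative, x) - x * math.cos(x)) < 1e-6
```

Differentiating back is the universal homework-checking trick: the Fundamental Theorem guarantees it has to land on the original integrand.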
But sometimes, especially if the function being integrated — the “integrand”, by the way, and good luck playing that in Scrabble — is a bunch of trig functions and exponentials, you get some sad situation like so:

∫ e^x sin(x) dx = e^x sin(x) − e^x cos(x) − ∫ e^x sin(x) dx
That is, the thing we wanted to integrate, on the left, turns up on the right too. The student sits down, feeling the futility of modern existence. We’re stuck with the original problem all over again and we’re short of tools to do something about it.
This is the point my friend was confused by, and is the bit of dark magic I want to talk about here. We’re not stumped! We can fall back on one of those mathematics tricks we are always allowed to do. And it’s a trick that’s so simple it seems like it can’t do anything.
It’s substitution. We are always allowed to substitute one thing for something else that’s equal to it. So in that above equation, what can we substitute, and for what? … Well, nothing in that particular bunch of symbols. We’re going to introduce a new one. It’s going to be the value of the integral we want to evaluate. Since it’s an integral, I’m going to call it ‘I’. You don’t have to call it that, but you’re going to anyway. It doesn’t need a more thoughtful name.
So I shall define:

I ≡ ∫ e^x sin(x) dx
The triple-equals-sign there is an extravagance, I admit. But it’s a common one. Mathematicians use it to say “this is defined to be equal to that”. Granted, that’s what the = sign means. But the triple sign connotes how we emphasize the definition part. That is, ‘I’ might have been anything at all, and we choose this one out of the universe of possibilities.
How does this help anything? Well, it turns the integration-by-parts problem into this equation:

I = e^x sin(x) − e^x cos(x) − I
And we want to know what ‘I’ equals. And now suddenly it’s easier to see that we don’t actually have to do any calculus from here on out. We can solve it the way we’d solve any problem in high school algebra, which is, move ‘I’ to the other side. Formally, we add the same thing to the left- and the right-hand sides. That’s ‘I’ …

2I = e^x sin(x) − e^x cos(x)
… and then divide both sides by the same number, 2 …

I = (e^x sin(x) − e^x cos(x)) / 2
And now remember that substitution is a free action. We can do it whenever we like, and we can undo it whenever we like. This is a good time to undo it. Putting the whole expression back in for ‘I’ we get …

∫ e^x sin(x) dx = (e^x sin(x) − e^x cos(x)) / 2
… which is the integral, evaluated.
(Someone would like to point out there should be a ‘plus C’ in there. This someone is right, for reasons that would take me too far afield to describe right now. We can overlook it for now anyway. I just want that someone to know I know what you’re thinking and you’re not catching me on this one.)
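To convince yourself the algebra did what it should, you can differentiate the result back. A minimal sketch, assuming the standard ∫ e^x sin(x) dx example: the value e^x (sin(x) − cos(x)) / 2 should differentiate to e^x sin(x).

```python
# Differentiate the algebra's answer back and compare against the
# original integrand e^x * sin(x).
import math

def candidate(x):
    """The answer the 'I' trick produces: e^x * (sin(x) - cos(x)) / 2."""
    return math.exp(x) * (math.sin(x) - math.cos(x)) / 2

def numerical_derivative(f, x, h=1e-6):
    """Centered finite-difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (0.0, 0.5, 1.0, 2.0):
    assert abs(numerical_derivative(candidate, x) - math.exp(x) * math.sin(x)) < 1e-5
```

No calculus software needed; the check is pure arithmetic, which is rather the point of the whole trick.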
Sometimes, the integration by parts will need two or even three rounds before you get back the original integrand. This is because the instructor has chosen a particularly nasty problem for homework or the exam. It is not hopeless! But you will see strange constructs like 4/5 I equalling something. Carry on.
What makes this a bit of dark magic? I think it’s because of habits. We write down something simple on the left-hand side of an equation. We get an expression for what the right-hand side should be, and it’s usually complicated. And then we try making the right-hand side simpler and simpler. The left-hand side started simple so we never even touch it again. Indeed, working out something like this it’s common to write the left-hand side once, at the top of the page, and then never again. We just write an equals sign, underneath the previous line’s equals sign, and stuff on the right. We forget the left-hand side is there, and that we can do stuff with it and to it.
I think also we get into a habit of thinking an integral and integrand and all that is some quasi-magic mathematical construct. But it isn’t. It’s just a function. It may even be just a number. We don’t know what it is, but it will follow all the same rules of numbers, or functions. Moving it around may be more writing but it’s not different work from moving ‘4’ or ‘x^2’ around. That’s the value of replacing the integral with a symbol like ‘I’. It’s not that there’s something we can do with ‘I’ that we can’t do with ‘∫ e^x sin(x) dx’, other than write it in under four pen strokes. It’s that in algebra we learned the habit of moving a letter around to where it’s convenient. Moving a whole integral expression around seems different.
But it isn’t. It’s the same work done, just on a different kind of mathematics. I suspect finding out that it could be a trick that simple throws people off.
It’s been a normal cluster of mathematically-themed jokes this past week. But one of them lets me show off my ability to do introductory calculus.
Norm Feuti’s Gil for the 15th of March is a resisting-the-word-problems joke. It’s also a rerun, sad to say. King Features syndicated Feuti’s strip for a couple of years, but couldn’t make a go of it. GoComics.com is reprinting what ran and that’s something, at least.
Justin Boyd’s Invisible Bread for the 16th plays on alarm clocks that make you solve problems. I’ve heard of these things, and I suppose they exist or something. The idea is that making you do a bit of arithmetic proves you’ve gotten up enough to not fall right back asleep. The clockmakers are underestimating my ability to get back to sleep. Anyway, I like the escalation of this.
The integral that has to be solved, ∫₀^∞ (2x + 1) e^(-x) dx, is a good problem for people taking their first calculus course. Let me spoil it as a homework problem by saying how I’d solve it. If you haven’t got the first idea what calculus is about and don’t wish to know, go ahead and skip to the bit about Rudy Park. Or just enjoy the parts of the sentences below that aren’t mathematics.
The first thing I notice is the integrand, the thing inside the integral. That’s (2x + 1) e^(-x), which is the same as 2x e^(-x) + e^(-x). Distributive law, as if you didn’t know. That strikes me as worth doing because, if the integral converges, the integral of the sum of two things is the same as the sum of the integrals of the two things. I’m willing to suppose it converges until given evidence otherwise. So this integral is the same as ∫₀^∞ e^(-x) dx + ∫₀^∞ 2x e^(-x) dx.
I think that’s worth doing because that first integral is incredibly easy. It’ll be a number equal to whatever -e^(-x) is, when x is infinitely large, minus what -e^(-x) is when x is zero. When x is infinitely large, -e^(-x) is zero. When x is 0, -e^(-x) is -1. So 0 minus -1 is … 1.
∫₀^∞ 2x e^(-x) dx is harder. But it suggests how to evaluate it. The integrand is the quantity 2x times the quantity e^(-x). 2x is easy to take the derivative of. e^(-x) is easy to integrate. (It’s also easy to take the derivative of, but it’s easier to integrate.) This suggests trying out integration by parts.
When you integrate by parts, you notice the original integral is the product of a part that’s easy to differentiate and a part that’s easy to integrate. My Intro Calculus textbooks generically label the easy-to-differentiate part u, and the easy-to-integrate part dv. Then the derivative of the easy-to-differentiate part is du, and the integral of the easy-to-integrate part is v. When you integrate by parts, the integral of u times dv turns out to be equal to u times v (no integral signs there) minus the integral of v du. This may sound like we’ve just turned one integral into another. So we have. But we’ve often made it into an easier integral to evaluate. This is why we ever try it.
So if u equals 2x, then its derivative du is equal to 2 dx. If dv is equal to e^(-x) dx (we want to carry those little d’s along), then v is equal to -e^(-x). And this means we have this:

∫₀^∞ 2x e^(-x) dx = [-2x e^(-x)]₀^∞ + ∫₀^∞ 2 e^(-x) dx
That middle part, [-2x e^(-x)]₀^∞, is not an integral. It’s been integrated. The notation there means to evaluate the thing when x is infinitely large, and evaluate the thing when x is zero. Then subtract the x-is-zero value from the x-is-infinitely-large value. The x-is-zero value of this expression turns out to be zero, as you realize when you start writing “2 times 0 times oh wait we’re done here”. The x-is-infinitely-large value of this expression takes longer to get done. If you want to do it right you have to invoke l’Hôpital’s Rule. But it’s also zero.
The right-hand part, ∫₀^∞ 2 e^(-x) dx, is equal to [-2 e^(-x)]₀^∞, and that’s equal to what -2 e^(-x) is when x is infinitely large minus what it is when x is zero. Which will be 0 minus a -2. Or 2 altogether.
So the integral is 1 plus 2, or in total, 3. The strip got its integration right.
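A numerical cross-check of that answer, truncating the infinite upper limit at 40 (the integrand is utterly negligible by then) and using a composite trapezoid rule with many small pieces:

```python
# Numerically confirm that the strip's integral of (2x + 1)e^(-x)
# from 0 to infinity really is 3.  The tail past x = 40 is negligible.
import math

def integrand(x):
    return (2 * x + 1) * math.exp(-x)

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n equal subintervals."""
    width = (b - a) / n
    total = (f(a) + f(b)) / 2
    total += sum(f(a + i * width) for i in range(1, n))
    return total * width

approx = trapezoid(integrand, 0, 40, 100_000)
assert abs(approx - 3) < 1e-6
```

So the strip, the by-parts work, and the computer all agree on 3, which is reassuring all around.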
Darrin Bell and Theron Heir’s Rudy Park for the 16th speaks of some architect who said the job didn’t demand being good at mathematics. I hadn’t heard the original claim and didn’t feel my constitution up to finding it. It was hard enough reading the comments at GoComics.com.
I’ve been on a bit of a logarithms kick lately, and I should say I’m not the only one. HowardAt58 has had a good number of articles about it, too, and I wanted to point some out to you. In this particular reblogging he brings a bit of calculus to show why the logarithm of the product of two numbers has to be the sum of the logarithms of the two separate numbers, in a way that’s more rigorous (if you’re comfortable with freshman calculus) than just writing down a couple of examples along the lines of how 10^2 times 10^3 is equal to 10^5. (I won’t argue that having a couple of specific examples might be better at communicating the point, but there’s a difference between believing something is so and being able to prove that it’s true.)
The derivative of the log function can be investigated informally, as log(x) is seen as the inverse of the exponential function, written here as exp(x). The exponential function appears naturally from numbers raised to varying powers, but formal definitions of the exponential function are difficult to achieve. For example, what exactly is the meaning of exp(pi) or exp(root(2))?
So we look at the log function:-
I have a guest post that I mean to put up shortly which is a spinoff of the talk last month about calculating logarithms. There are several ways to define a logarithm but one of the most popular is to define it as an integral. That has the advantages of allowing the logarithm to be studied using a lot of powerful analytic tools already built up for calculus, and of allowing it to be calculated numerically, because there are a lot of methods for calculating integrals out there. I wanted to precede that post with a discussion of a couple of the ways to do these numerical integrations.
A great way to interpret integrating a function is to imagine drawing a plot of the function; the integral is the net area between the x-axis and the plot of that function. That may be pretty hard to do, though, so we fall back on a standard mathematician’s trick that they never tell you about in grade school, probably for good reason: don’t bother doing the problem you actually have, and instead do a problem that looks kind of like it but that you are able to do.
Normally, for what’s called a definite integral, we’re interested in the area underneath a curve and across an “interval”, that is, between some minimum and some maximum value on the x-axis. Definite integrals are the kind we can approximate numerically. An indefinite integral gives a function that would tell us what the definite integral on any interval would be, but that takes symbolic mathematics to work out and that’s way beyond this article’s scope.
While we may have no idea what the area underneath a complicated squiggle on some interval is, we do know what the area inside a rectangle is. So if we pretend we’re interested in the area of the rectangle instead of the original area, good. Take my little drawing of a generic function here, the wavy red curve. The integral of it from wherever that left vertical green line is to the right is the area between the x-axis, the horizontal black line, and the red curve.
If we use the “Rectangle Rule”, we draw a horizontal line based on the value of the function somewhere from the left line to the right. The yellow line up top is based on the value at the left endpoint. The blue line is based on the value the function has at the right endpoint. We can use any point, although the most popular ones are the left endpoint, the right endpoint, and the midpoint, because those are nice, easy picks to make. (And if we’re trying to integrate a function whose definition we don’t know, for example because it’s the data we got from an experiment, these will often be the only data points we have.) The area under the curve is going to be something like the area of the rectangle bounded by the green lines, the horizontal black line, and the blue horizontal line or the yellow horizontal line.
Drawn this way you might complain this approximation is rubbish: the area of the blue-topped rectangle is obviously way too low, and that of the yellow-topped rectangle is way too high. The mathematician’s answer to this is: oh, hush. We were looking for easy, not good. The area is the width of the interval times the value of the function at the chosen point; how much easier can you get?
(It also happens that the blue rectangle obviously gives too low an area, while the yellow gives too high an area. This is a coincidence, caused by my not thinking to make my function wiggle up and down quite enough. Generally speaking neither the left- nor the right-endpoints are maximum or minimum values for the function. It can be useful analytically to select the points that are “where the function is its highest” and “where the function is its lowest” — this lets you find the upper and lower bounds for the area — but that’s generally too hard to use computationally.)
But we can turn this into a good approximation. What makes the blue or the yellow lines lousy approximations is that the function changes a lot in the distance between the green lines. If we were to chop up the strip into a bunch of smaller ones, and use the rectangle rule on each of those pieces, the function would change less in each of those smaller pieces, and so we’d get an area total that’s closer to the actual area. We find the distance between a pair of adjacent vertical green lines, multiply that by the height of the function at the chosen point, and add that to the running total. This is properly called the “Composite Rectangle Rule”, although it’s really only textbooks introducing the idea that make a fuss about including the “composite”. It just makes so much sense to break the interval up that we do that all the time and forget to explicitly say that except in the class where we introduce this.
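A minimal sketch of the Composite Rectangle Rule (the names are my own), using the left endpoint of each strip for the height:

```python
# Composite Rectangle Rule: chop [a, b] into n strips and add up
# rectangle areas, sampling the height at each strip's left endpoint.
import math

def composite_rectangle(f, a, b, n):
    width = (b - a) / n
    return sum(f(a + i * width) * width for i in range(n))

# More strips, better answer: the integral of sin from 0 to pi is exactly 2.
coarse = composite_rectangle(math.sin, 0, math.pi, 8)
fine = composite_rectangle(math.sin, 0, math.pi, 1024)
assert abs(fine - 2) < abs(coarse - 2)
```

Swapping the sampled point for the midpoint or the right endpoint is a one-line change, which is part of why the rule is so pleasant to teach.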
(And, notice, in my drawings that in some of the regions behind vertical green lines the left-endpoint and the right-endpoint are not where the function gets its highest, or lowest, value. They can just be points.)
There’s nothing special about the Rectangle Rule that makes it uniquely suited for composition. It’s just easier to draw that way. Any numerical integration rule lets you do the same trick. Also, it’s very common to make all the smaller rectangles — called the subintervals — the same width, but that’s not because the method needs that to work. It’s easier to calculate if all the subintervals are the same width, because then you don’t have to remember how wide each different subinterval is.
Rectangles are probably the easiest shape of all to deal with, but they’re not the only easy shapes. Trapezoids, or trapeziums if you prefer, are hardly a challenge to find the area for. This gives me the next really popular integration rule, the “Trapezoid Rule” or “Trapezium Rule” as your dialect favors. We take the function and approximate its area by working out the area of the trapezoid formed by the left green edge, the bottom black edge, the right green edge, and the sloping blue line that goes from where the red function touches the left end to where the function touches the right end. This is a little harder than the Rectangle Rule: we have to multiply the width of the interval between the green lines by the arithmetic mean of the function’s value at the left and at the right endpoints. That means, evaluate the function at the left endpoint and at the right endpoint, add those two values together, and divide by two. Not much harder and it’s pleasantly more accurate than the Rectangle Rule.
If that’s not good enough for you, you can break the interval up into a bunch of subintervals, just as with the Composite Rectangle Rule, and find the areas of all the trapezoids created there. This is properly called the “Composite Trapezoid Rule”, but again, after your Numerical Methods I class you won’t see the word “composite” prefixed to the name again.
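The trapezoid version is hardly any more code than the rectangle one. A sketch (names mine):

```python
# Composite Trapezoid Rule: average the endpoint heights of each strip
# instead of picking just one of them.
import math

def composite_trapezoid(f, a, b, n):
    width = (b - a) / n
    inner = sum(f(a + i * width) for i in range(1, n))
    return (f(a) / 2 + inner + f(b) / 2) * width

# The integral of e^x from 0 to 1 is e - 1; a hundred strips get it
# to within about a hundred-thousandth.
approx = composite_trapezoid(math.exp, 0, 1, 100)
assert abs(approx - (math.e - 1)) < 1e-4
```

Note the interior points each get weight 1 and the two endpoints weight 1/2; that is all the averaging amounts to once the strips are summed.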
And yet we can do better still. We’ll remember this when we pause a moment and think about what we’re really trying to do. When we do a numerical integration like this we want to find, instead of the area underneath our original curve, the area underneath a curve that looks like it but that’s easier to deal with. (Yes, we’re calling the straight lines of the Rectangle and Trapezoid Rules “curves”. Hush.) We can use any curve that we know how to deal with. Parabolas — the curving arc that you see if, say, you shoot the water from a garden hose into the air — may not seem terribly easy to deal with, but it turns out it’s not hard to figure out the area underneath a slice of one of them. This gives us the integration technique called “Simpson’s Rule”.
The Simpson here is Thomas Simpson, 1710 – 1761, who in accord with Mathematics Law did not discover or invent the rule named for him. Johannes Kepler knew the rule a century before Simpson got into the game, at minimum, and both Galileo’s student Bonaventura Cavalieri (who introduced logarithms to Italy, and was one of those people creeping up on infinitesimal calculus ahead of Newton) and the Scottish mathematician/physicist James Gregory (who discovered the diffraction grating, and seems to be the first person to have published a proof of the Fundamental Theorem of Calculus) were in on it too. But Simpson wrote a number of long-lived textbooks about calculus, which probably is why his name got tied to this method.
In Simpson’s Rule, you need the value of the function at the left endpoint, the midpoint, and the right endpoint of the interval. You can draw the parabola which connects those points — it’s the blue curve in my drawing — and find the area underneath that parabola. The formula may sound a little funny but it isn’t hard: the area underneath the parabola is one-third the spacing between the sample points (that is, half the interval’s width) times the sum of the value at the left endpoint, the value at the right endpoint, and four times the value at the midpoint. It’s a bit more work but it’s a lot more accurate than the Trapezoid Rule.
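Written out with the full interval width b − a, the weight becomes (b − a)/6, which is one-third of the half-width between sample points. A sketch (names mine) of the single-interval rule:

```python
# Simpson's Rule on one interval: (b - a)/6 times
# (left value + 4 * midpoint value + right value).
import math

def simpson(f, a, b):
    midpoint = (a + b) / 2
    return (b - a) / 6 * (f(a) + 4 * f(midpoint) + f(b))

# Exact for parabolas, and strikingly decent even for a whole sine arch
# from a single parabola:
assert abs(simpson(lambda x: x * x, 0, 3) - 9) < 1e-12
assert abs(simpson(math.sin, 0, math.pi) - 2) < 0.1
```

Being exact on parabolas is the whole design: the rule integrates the interpolating parabola instead of the function, so any error comes only from how far the function strays from that parabola.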
There are literally infinitely many more rules you could use, with such names as “Simpson’s Three-Eighths Rule” (also called “Simpson’s Second Rule”) or “Boole’s Rule”, but they’re based on similar tricks of making a function that looks like the one you’re interested in but whose area you know how to calculate exactly. For Simpson’s Three-Eighths Rule, for example, you make a cubic polynomial instead of a parabola. If you’re good at finding the areas underneath wedges of circles or underneath hyperbolas or underneath sinusoidal functions, go ahead and use those. You can find the balance of ease of use and accuracy of result that you like.
Boole’s Rule is also known as Bode’s Rule, because of a typo in the 1972 edition of the Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, or as everyone ever has always referred to this definitive mathematical reference, Abramowitz and Stegun. (Milton Abramowitz and Irene Stegun were the reference’s editors.)
Sometime in late August or early September 1994 I had one of those quietly astounding moments on a computer. It would have been while using Maple, a program capable of doing symbolic mathematics. It was something capable not just of working out what the product of two numbers is, but of holding representations of functions and working out what the product of those functions was. That’s striking enough, but more was to come: I could describe a function and have Maple do the work of symbolically integrating it. That was astounding then, and it really ought to be yet. Let me explain why.
It’s fairly easy to think of symbolic representations of functions: if f(x) equals some particular expression, well, you know if I give you some value for x, you can give me back an f(x), and if you’re a little better you can describe, roughly, a plot of x versus f(x). That is, that’s the plot of all the points on the plane for which the value of the x-coordinate and the value of the y-coordinate make the statement “y = f(x)” a true statement.
If you’ve gotten into calculus, though, you’d like to know other things: the derivative, for example, of f(x). That is (among other interpretations), if I give you some value for x, you can tell me how quickly f(x) is changing at that x. Working out the derivative of a function is a bit of work, but it’s not all that hard; there’s maybe a half-dozen or so rules you have to follow, plus some basic cases where you learn what the derivative of x to a power is, or what the derivative of the sine is, or so on. (You don’t really need to learn those basic cases, but it saves you a lot of work if you do.) It takes some time to learn them, and what order to apply them in, but once you do it’s almost automatic. If you’re smart you might do some problems better, but, you don’t have to be smart, just indefatigable.
Integrating a function (among other interpretations, that’s finding the amount of area underneath a curve) is different, though, even though it’s kind of an inverse of finding the derivative. If you integrate a function, and then take its derivative, you get back the original function, unless you did it wrong. (For various reasons if you take a derivative and then integrate you won’t necessarily get back the original function, but you’ll get something close to it.) However, that integration is still really, really hard. There are rules to follow, yes, but despite that it’s not necessarily obvious what to do, or why to do it, and even if you do know the various rules and use them perfectly you’re not necessarily guaranteed to get an answer. Being indefatigable might help, but you also need to be smart.
So, it’s easy to imagine writing a computer program that can find a derivative; to find an integral, though? That’s amazing, and still is amazing. And that brings me at last to this tweet from @mathematicsprof:
Wonder how Wolframalpha finds all those nasty anti-derivatives? Symbolic integration, Louiville to Risch -> http://t.co/vpirsn2g1E
The document linked to by this is a master’s thesis, titled Symbolic Integration, prepared by one Björn Terelius for the Royal Institute of Technology in Stockholm. It’s a fair-sized document, but it does open with a history of computers that work out integrals that anyone ought to be able to follow. It goes on to describe the logic behind algorithms that do this sort of calculation, though, and should be quite helpful in understanding just how it is the computer does this amazing thing.
(For a bonus, it also contains a short proof of why you can’t integrate e^(x^2), one of those functions that looks nice and easy and that drives you crazy in Calculus II when you give it your best try.)
I guess this is a good time to give my answer for the challenge of how many different trapezoids there are to draw. At the least it’ll provide an answer to people who seek on Google the answer to how many trapezoids there are to draw. In principle there’s an infinite number that can be drawn, of course, but I wanted to cut down the ways that seem to multiply cases without really being different shapes. For example, rotating a trapezoid doesn’t make it new, and just stretching it out longer in one direction or another shouldn’t. And just enlarging or shrinking the whole thing doesn’t change it. So given that, how many kinds of trapezoids do I see?