My 2019 Mathematics A To Z: Fourier series


Today’s A To Z term came to me from two nominators. One was @aajohannas, again offering a great topic. Another was Mr Wu, author of the Singapore Maths Tuition blog. I hope neither’s disappointed here.

Fourier series are named for Jean-Baptiste Joseph Fourier, and are maybe the greatest example of the theory that’s brilliantly wrong. Anyone can be wrong about something. There’s genius in being wrong in a way that gives us good new insights into things. Fourier series were developed to understand how the fluid we call “heat” flows through and between objects. Heat is not a fluid. So what? Pretending it’s a fluid gives us good, accurate results. More, you don’t need a fluid, or a thing you’re pretending is a fluid, to use Fourier series. They work for lots of stuff. The Fourier series method challenged assumptions mathematicians had made about how functions worked, how continuity worked, how differential equations worked. These problems could be sorted out. It took a lot of work. It challenged and expanded our ideas of functions.

Fourier also managed to hold political offices in France during the Revolution, the Consulate, the Empire, the Bourbon Restoration, the Hundred Days, and the Second Bourbon Restoration without getting killed for his efforts. If nothing else this shows the depth of his talents.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it, makes the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Fourier series.

So, how do you solve differential equations? As long as they’re linear? There’s usually something we can do. This is one approach. It works well. It has a bit of a weird setup.

The weirdness of the setup: you want to think of functions as points in space. The analogy is rather close. Think of the common association between a point in space and the coordinates that describe that point. Pretend those are the same thing. Then you can do stuff like add points together. That is, take the coordinates of both points. Add the corresponding coordinates together. Match that sum-of-coordinates to a point. This gives us the “sum” of two points. You can subtract points from one another, again by going through their coordinates. Multiply a point by a constant and get a new point. Find the angle between two points. (This is the angle formed by the line segments connecting the origin and both points.)

Functions can work like this. You can add functions together and get a new function. Subtract one function from another. Multiply a function by a constant. It’s even possible to describe an “angle” between two functions. Mathematicians usually call that the dot product or the inner product. But we will sometimes call two functions “orthogonal”. That means the ordinary everyday meaning of “orthogonal”, if anyone said “orthogonal” in ordinary everyday life.
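If you’d like something concrete for that inner product, the usual choice for functions on some interval from a to b is an integral:

\left\langle f, g \right\rangle = \int_a^b f(x)\cdot g(x)\, dx

Two functions are “orthogonal” when that integral comes out to zero. (There are other inner products one could pick; this is the common one, and the one that matters for this essay.)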

We can take equations of a bunch of variables and solve them. Call the values of that solution the coordinates of a point. Then we talk about finding the point where something interesting happens. Or the points where something interesting happens. We can do the same with differential equations. This is finding a point in the space of functions that makes the equation true. Maybe a set of points. So we can find a function or a family of functions solving the differential equation.

You have reasons for skepticism, even if you’ll grant me treating functions as being like points in space. You might remember solving systems of equations. You need as many equations as there are dimensions of space; a two-dimensional space needs two equations. A three-dimensional space needs three equations. You might have worked four equations in four variables. You were threatened with five equations in five variables if you didn’t all settle down. You’re not sure how many dimensions the space of “all the possible functions” has. It’s got to be more than the one differential equation we started with.

This is fair. The approach I’m talking about uses the original differential equation, yes. But it breaks it up into a bunch of linear equations. Enough linear equations to match the space of functions. We turn a differential equation into a set of linear equations, a matrix problem, like we know how to solve. So that settles that.

So suppose f(x) solves the differential equation. Here I’m going to pretend that the function has one independent variable. Many functions have more than this. Doesn’t matter. Everything I say here extends into two or three or more independent variables. It takes longer and uses more symbols and we don’t need that. The thing about f(x) is that we don’t know what it is, but would quite like to.

What we’re going to do is choose a reference set of functions that we do know. Let me call them g_0(x), g_1(x), g_2(x), g_3(x), \cdots going on to however many we need. It can be infinitely many. It certainly is at least up to some g_N(x) for some big enough whole number N. These are a set of “basis functions”. For any function we want to represent we can find a bunch of constants, called coefficients. Let me use a_0, a_1, a_2, a_3, \cdots to represent them. Any function we want is the sum of each coefficient times its matching basis function. That is, there’s some coefficients so that

f(x) = a_0\cdot g_0(x) + a_1\cdot g_1(x) + a_2\cdot g_2(x) + a_3\cdot g_3(x) + \cdots

is true. That summation goes on until we run out of basis functions. Or it runs on forever. This is a great way to solve linear differential equations. This is because we know the basis functions. We know everything we care to know about them. We know their derivatives. We know everything on the right-hand side except the coefficients. The coefficients matching any particular function are constants. So the derivatives of f(x) , written as the sum of coefficients times basis functions, are easy to work with. If we need second or third or more derivatives? That’s no harder to work with.
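For example, assuming we may differentiate the series term by term (for the well-behaved cases we care about here, we may), the first derivative is just

f'(x) = a_0\cdot g_0'(x) + a_1\cdot g_1'(x) + a_2\cdot g_2'(x) + a_3\cdot g_3'(x) + \cdots

and the g'-functions are things we already know.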

You may know something about matrix equations. That is that solving them takes freaking forever. The bigger the equation, the more forever. If you have to solve eight equations in eight unknowns? If you start now, you might finish in your lifetime. For this function space? We need dozens, hundreds, maybe thousands of equations and as many unknowns. Maybe infinitely many. So we seem to have a solution that’s great apart from how we can’t use it.

Except. What if the equations we have to solve are all easy? If we have to solve a bunch that looks like, oh, 2a_0 = 4 and 3a_1 = -9 and 2a_2 = 10 … well, that’ll take some time, yes. But not forever. Great idea. Is there any way to guarantee that?

It’s in the basis functions. If we pick functions that are orthogonal, or are almost orthogonal, to each other? Then we can turn the differential equation into an easy matrix problem. Not as easy as in the last paragraph. But still, not hard.
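Here’s a sketch of why, again assuming we can work with the series term by term. Take the inner product of both sides of the expansion of f(x) with one particular basis function g_k(x). Orthogonality wipes out every term on the right-hand side except the one carrying a_k:

\left\langle f, g_k \right\rangle = a_k \left\langle g_k, g_k \right\rangle \quad \mbox{ so } \quad a_k = \frac{\left\langle f, g_k \right\rangle}{\left\langle g_k, g_k \right\rangle}

Each coefficient gets its own little one-unknown equation. In matrix language, the matrix is diagonal, or close to it.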

So what’s a good set of basis functions?

And here, about 800 words later than everyone was expecting, let me introduce the sine and cosine functions. Sines and cosines make great basis functions. They don’t grow without bounds. They don’t dwindle to nothing. They’re easy to differentiate. They’re easy to integrate, which is really special. Most functions are hard to integrate. We even know what they look like. They’re waves. Some have long wavelengths, some short wavelengths. But waves. And … well, it’s easy to make sets of them orthogonal.

We have to set some rules. The first is that each of these sine and cosine basis functions has a period. That is, after some time (or distance), they repeat. They might repeat before that. Most of them do, in fact. But we’re guaranteed a repeat after no longer than some period. Call that period ‘L’.

Each of these sine and cosine basis functions has to have a whole number of complete oscillations within the period L. So we can say something about the sine and cosine functions. They have to look like these:

s_j(x) = \sin\left(\frac{2\pi j}{L} x\right)

c_k(x) = \cos\left(\frac{2\pi k}{L} x\right)

Here ‘j’ and ‘k’ are some whole numbers. I have two sets of basis functions at work here. Don’t let that throw you. We could have labelled them all as g_k(x) , with some clever scheme that told us for a given k whether it represents a sine or a cosine. It’s less hard work if we have s’s and c’s. And if we have coefficients of both a’s and b’s. That is, we suppose the function f(x) is:

f(x) = \frac{1}{2}a_0 + b_1 s_1(x) + a_1 c_1(x) + b_2 s_2(x) + a_2 c_2(x) + b_3 s_3(x) + a_3 c_3(x) + \cdots

This, at last, is the Fourier series. Each function has its own series. A “series” is a summation. It can be of finitely many terms. It can be of infinitely many. Often infinitely many terms give more interesting stuff. Like this, for example. Oh, and there’s a bare \frac{1}{2}a_0 there, not multiplied by anything more complicated. It makes life easier. It lets us see that the Fourier series for, like, 3 + f(x) is the same as the Fourier series for f(x), except for the leading term. The ½ before that makes easier some work that’s outside the scope of this essay. Accept it as one of the merry, wondrous appearances of ‘2’ in mathematics expressions.
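In case you’d like them, the coefficients have tidy formulas of their own. Integrating over one full period, say from 0 to L (this is the usual convention; shift the interval if your problem prefers a different one):

a_k = \frac{2}{L}\int_0^L f(x) \cos\left(\frac{2\pi k}{L} x\right)\, dx \qquad b_j = \frac{2}{L}\int_0^L f(x) \sin\left(\frac{2\pi j}{L} x\right)\, dx

That \frac{1}{2} in front of a_0 is what lets the a_k formula work for k = 0 too, with no special case needed.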

It’s great for solving differential equations. It’s also great for data compression. The sines and the cosines are standard functions, after all. We can send all the information we need to reconstruct a function by sending the coefficients for it. This can also help us pick out signal from noise. Noise has a Fourier series that looks a particular way. If you take the coefficients for a noisy signal and remove that? You can get a good approximation of the original, noiseless, signal.
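If you want to play with that signal-and-noise idea, here is a minimal sketch in Python. It uses NumPy’s discrete Fourier transform as a stand-in for the series; the signal, the noise level, and the cutoff are all made up for illustration.

import numpy as np

# A made-up smooth signal plus random noise, sampled at 1000 points.
x = np.linspace(0.0, 1.0, 1000, endpoint=False)
clean = np.sin(2 * np.pi * 3 * x) + 0.5 * np.cos(2 * np.pi * 5 * x)
noisy = clean + 0.3 * np.random.default_rng(0).standard_normal(x.size)

# Noise spreads its coefficients across all frequencies; the signal lives in a few.
coefficients = np.fft.rfft(noisy)
coefficients[20:] = 0.0                      # drop everything above a (made-up) cutoff
recovered = np.fft.irfft(coefficients, n=x.size)

print(np.max(np.abs(noisy - clean)))         # how far off the noisy signal was
print(np.max(np.abs(recovered - clean)))     # how far off the filtered one is; much smaller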

This all seems great. That’s a good time to feel skeptical. First, like, not everything we want to work with looks like waves. Suppose we need a function that looks like a parabola. It’s silly to think we can add a bunch of sines and cosines and get a parabola. Like, a parabola isn’t periodic, to start with.

So it’s not. To use Fourier series methods on something that’s not periodic, we use a clever technique: we tell a fib. We declare that the period is something bigger than we care about. Say the period is, oh, ten million years long. A hundred light-years wide. Whatever. We trust that the difference between the function we do want, and the function that we calculate, will be small. We trust that if someone ten million years from now and a hundred light-years away wishes to complain about our work, we will be out of the office that day. Letting the period L be big enough is a good reliable tool.

The other thing? Can we approximate any function as a Fourier series? Like, at least chunks of parabolas? Polynomials? Chunks of exponential growths or decays? What about sawtooth functions, that rise and fall? What about step functions, that are constant for a while and then jump up or down?

The answer to all these questions is “yes,” although drawing out the word and raising a finger to say there are some issues we have to deal with. One issue is that most of the time, we need an infinitely long series to represent a function perfectly. This is fine if we’re trying to prove things about functions in general rather than solve some specific problem. It’s no harder to write the sum of infinitely many terms than the sum of finitely many terms. You write an ∞ symbol instead of an N in some important places. But if we want to solve specific problems? We probably want to deal with finitely many terms. (I hedge that statement on purpose. Sometimes it turns out we can find a formula for all the infinitely many coefficients.) This will usually give us an approximation of the f(x) we want. The approximation can be as good as we want, but to get a better approximation we need more terms. Fair enough. This kind of tradeoff doesn’t seem too weird.

Another issue is in discontinuities. If f(x) jumps around? If it has some point where it’s undefined? If it has corners? Then the Fourier series has problems. Summing up sines and cosines can’t give us a sudden jump or a gap or anything. Near a discontinuity, the Fourier series will get this high-frequency wobble. A bigger jump, a bigger wobble. You may not blame the series for not representing a discontinuity. But it does mean that what is, otherwise, a pretty good match for the f(x) you want gets this region where it stops being so good a match.
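That wobble has a name, the Gibbs phenomenon, and you can watch it happen with a short sketch. Here I’m assuming the classic square wave, the function that’s +1 for half its period and -1 for the other half; its Fourier series is a sum of \frac{4}{\pi} \frac{\sin\left(\left(2k+1\right)x\right)}{2k+1} terms.

import math

def square_wave_partial_sum(x, terms):
    # Partial Fourier series for the wave that's +1 on (0, pi) and -1 on (-pi, 0).
    return (4.0 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(terms)
    )

# The true value just to the right of the jump at x = 0 is 1. The partial sums
# overshoot it by roughly 0.18, about nine percent of the full jump from -1 to +1.
# Adding more terms doesn't shrink the overshoot; it only squeezes the wobble
# closer to the jump.
for terms in (10, 100, 1000):
    peak = max(square_wave_partial_sum(n * 0.0001, terms) for n in range(1, 2000))
    print(terms, round(peak, 4))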

That’s all right. These issues aren’t bad enough, or unpredictable enough, to keep Fourier series from being powerful tools. Even when we find problems for which sines and cosines are poor fits, we use this same approach. Describe a function we would like to know as the sums of functions we choose to work with. Fourier series are one of those ideas that helps us solve problems, and guides us to new ways to solve problems.


This is my last big essay for the week. All of the Fall 2019 A To Z posts should be at this link. The letter G should get its chance on Tuesday and H next Thursday. All of my A To Z essays, this year’s and earlier years’, should be available at this link. If you’d like to nominate topics for essays, I’m asking for the letters I through N at this link. Thank you.

Reading the Comics, May 23, 2018: Nice Warm Gymnasium Edition


I haven’t got any good ideas for the title for this collection of mathematically-themed comic strips. But I was reading the Complete Peanuts for 1999-2000 and just ran across one where Rerun talked about consoling his basketball by bringing it to a nice warm gymnasium somewhere. So that’s where that pile of words came from.

Mark Anderson’s Andertoons for the 21st is the Mark Anderson’s Andertoons for this installment. It has Wavehead suggest a name for the subtraction of fractions. It’s not by itself an absurd idea. Many mathematical operations get specialized names, even though we see them as specific cases of some more general operation. This may reflect the accidents of history. We have different names for addition and subtraction, though we eventually come to see them as the same operation.

On the board, 3/5 - 1/4. Wavehead, to teacher: 'You should call it sub-*fraction*. You can use that --- that's a freebie.'
Mark Anderson’s Andertoons for the 21st of May, 2018. I’m not sure the girl in class needs to be quite so horrified by this suggestion. On the other hand, she sees a lot of this kind of stuff in class.

In calculus we get introduced to Maclaurin Series. These are polynomials that approximate more complicated functions. They’re the best possible approximations for a region around 0 in the domain. They’re special cases of the Taylor Series. Those are polynomials that approximate more complicated functions. But you get to pick where in the domain they should be the best approximation. Maclaurin series are nothing but Taylor series; we keep the names separate anyway, for reasons. Slightly baffling ones: James Gregory and Brook Taylor studied Taylor series before Colin Maclaurin did Maclaurin series. But at least Taylor worked on Taylor series, and Maclaurin on Maclaurin series. So for a wonder mathematicians named these things for appropriate people. (Ignoring that Indian mathematicians were poking around this territory centuries before the Europeans were. I don’t know whether English mathematicians of the 18th century could be expected to know of Indian work in the field, in fairness.)

In numerical calculus, we have a scheme for approximating integrals known as the trapezoid rule. It approximates the areas under curves by approximating a curve as a trapezoid. (Any questions?) But this is one of the Runge-Kutta methods. Nobody calls it that except to show they know neat stuff about Runge-Kutta methods. The special names serve to pick out particularly interesting or useful cases of a more generally used thing. Wavehead’s coinage probably won’t go anywhere, but it doesn’t hurt to ask.

Skippy: 'Look at 'im. The meanest kid on the block. He's got a grudge on the school teacher 'cause she made him stop copyin' answers out of his arithmetic. So he tore out the front of the book an' says 'What good is it without the last part?'
Percy Crosby’s Skippy for the 22nd of May, 2018. It was originally run, looks like, the 12th of February, 1931.

Percy Crosby’s Skippy for the 22nd I admit I don’t quite understand. It mentions arithmetic anyway. I think it’s a joke about a textbook like this being good only if it’s got the questions and the answers. But it’s the rare Skippy that’s as baffling to me as most circa-1930 humor comics are.

Lecturer presenting a blackboard full of equations, titled, 'Mathematical Proof that God does not exist'. In the audience is God.
Ham’s Life on Earth for the 23rd of May, 2018. How did the lecturer get stuff on the top of the board there?

Ham’s Life on Earth for the 23rd presents the blackboard full of symbols as an attempt to prove something challenging. In this case, to say something about the existence of God. It’s tempting to suppose that we could say something about the existence or nonexistence of God using nothing but logic. And there are mathematics fields that are very close to pure logic. But our scary friends in the philosophy department have been working on the ontological argument for a long while. They’ve found a lot of arguments that seem good, and that fall short for reasons that seem good. I’ll defer to their experience, and suppose that any mathematics-based proof would have the same problems.

Paige: 'I keep forgetting ... what's the cosine of 60 degrees?' Jason: 'Well, let's see. If I recall correctly ... 1 - (pi/3)^2/2! + (pi/3)^4/4! - (pi/3)^6/6! + (pi/3)^8/8! - (pi/3)^10/10! + (pi/3)^12/12! - (and this goes on a while, up to (pi/3)^32/32! - ... )' Paige: 'In case you've forgotten, I'm not paying you by the hour.' Jason: '1/2'.
Bill Amend’s FoxTrot Classics for the 23rd of May, 2018. It originally ran the 29th of May, 1996.

Bill Amend’s FoxTrot Classics for the 23rd deploys a Maclaurin series. If you want to calculate the cosine of an angle, and you know the angle in radians, you can find the value by adding up the terms in an infinitely long series. So if θ is the angle, measured in radians, then its cosine will be:

\cos\left(\theta\right) = \sum_{k = 0}^{\infty} \left(-1\right)^k \frac{\theta^{2k}}{\left(2k\right)!}

60 degrees is \frac{\pi}{3} in radians and you see from the comic how to turn this series into a thing to calculate. The series does, yes, go on forever. But since the terms alternate in sign — positive then negative then positive then negative — you catch a break. Suppose all you want is the answer to within an error margin. Then you can stop adding up terms once you’ve gotten to a term that’s smaller than your error margin. So if you want the answer to within, say, 0.001, you can stop as soon as you find a term with absolute value less than 0.001.
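If you’d like to see that stopping rule in action, here’s a minimal sketch in Python. The angle is Jason’s, from the comic; the error margin is made up for illustration.

import math

def maclaurin_cos(theta, tolerance=0.001):
    # Add up (-1)^k * theta^(2k) / (2k)! until the next term is below the tolerance.
    total = 0.0
    k = 0
    while True:
        term = (-1) ** k * theta ** (2 * k) / math.factorial(2 * k)
        if abs(term) < tolerance:
            break
        total += term
        k += 1
    return total

print(maclaurin_cos(math.pi / 3))    # about 0.5, the cosine of 60 degrees
print(math.cos(math.pi / 3))         # the library's answer, for comparison

Because the terms alternate in sign and shrink, the error of the truncated sum is no bigger than the first term you skip, which is why stopping at the tolerance is safe.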

For high school trig, though, this is all overkill. There’s five really interesting angles you’d be expected to know about. They’re 0, 30, 45, 60, and 90 degrees. And you need to know about reflections of those across the horizontal and vertical axes. Those give you, like, -30 degrees or 135 degrees. Those reflections don’t change the magnitude of the cosines or sines. They might change the plus-or-minus sign is all. And there’s only three pairs of numbers that turn up for these five interesting angles. There’s 0 and 1. There’s \frac{1}{2} and \frac{\sqrt{3}}{2} . There’s \frac{1}{\sqrt{2}} and \frac{1}{\sqrt{2}} . Three things to memorize, plus a bit of orienteering, to know whether the cosine or the sine should be the larger one and whether they should be positive or negative. And then you’ve got them all.

You might get asked for, like, the sine of 15 degrees. But that’s someone testing whether you know the angle-addition or angle-subtraction formulas. Or the half-angle and double-angle formulas. Nobody would expect you to know the cosine of 15 degrees. The cosine of 30 degrees, though? Sure. It’s \frac{\sqrt{3}}{2} .

Michael: 'It's near the end of the school year. You should ease up on the homework. I've learned more than enough this year.' Teacher: 'Oh, sure. How does a 50-percent cut sound?' Michael: 'Why cut it by just one-third?' Teacher: 'You're not helping your case.'
Mike Thompson’s Grand Avenue for the 23rd of May, 2018. I don’t know why the kid and the teacher are dressed the same. I’m honestly not sure if they’re related.

Mike Thompson’s Grand Avenue for the 23rd is your basic confused-student joke. People often have trouble going from percentages to decimals to fractions and back again. Me, I have trouble in going from percentage chances to odds, as in, “two to one odds” or something like that. (Well, “one to one odds” I feel confident in, and “two to one” also. But, say, “seven to five odds” I can’t feel sure I understand, other than that the second choice is perceived to be a bit more likely than the first.)

… You know, this would have parsed as the Maclaurin Series Edition, wouldn’t it? Well, if only I were able to throw away words I’ve already written and replace them with better words before publishing, huh?

Reading the Comics, October 19, 2016: An Extra Day Edition


I didn’t make noise about it, but last Sunday’s mathematics comic strip roundup was short one day. I was away from home and normal computer stuff Saturday. So I posted without that day’s strips under review. There was just the one, anyway.

Also I want to remind folks I’m doing another Mathematics A To Z, and taking requests for words to explain. There are many appealing letters still unclaimed, including ‘A’, ‘T’, and ‘O’. Please put requests in over on that page, because it’s easier for me to keep track of what’s been claimed that way.

Matt Janz’s Out of the Gene Pool rerun for the 15th missed last week’s cut. It does mention the Law of Cosines, which is what the Pythagorean Theorem looks like if you don’t have a right triangle. You still have to have a triangle. Bobby-Sue recites the formula correctly, if you know the notation. The formula’s c^2 = a^2 + b^2 - 2 a b \cos\left(C\right) . Here ‘a’ and ‘b’ and ‘c’ are the lengths of legs of the triangle. ‘C’, the capital letter, is the size of the angle opposite the leg with length ‘c’. That’s a common notation. ‘A’ would be the size of the angle opposite the leg with length ‘a’. ‘B’ is the size of the angle opposite the leg with length ‘b’. The Law of Cosines is a generalization of the Pythagorean Theorem. It’s a result that tells us something like the original theorem but for cases the original theorem can’t cover. And if it happens to be a right triangle the Law of Cosines gives us back the original Pythagorean Theorem. In a right triangle C is the size of a right angle, and the cosine of that is 0.
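As a quick check with made-up side lengths, take a triangle with a = 3, b = 4, and a right angle C between them:

c^2 = 3^2 + 4^2 - 2\cdot 3\cdot 4\cdot \cos\left(90^\circ\right) = 9 + 16 - 0 = 25

So c = 5, exactly the 3-4-5 right triangle the Pythagorean Theorem promises.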

That said Bobby-Sue is being fussy about the drawings. No geometrical drawing is ever perfectly right. The universe isn’t precise enough to let us draw a right triangle. Come to it we can’t even draw a triangle, not really. We’re meant to use these drawings to help us imagine the true, Platonic ideal, figure. We don’t always get there. Mock proofs, the kind of geometric puzzle showing something we know to be nonsense, rely on that. Give chalkboard art a break.

Samson’s Dark Side of the Horse for the 17th is the return of Horace-counting-sheep jokes. So we get a π joke. I’m amused, although I couldn’t sleep trying to remember digits of π out quite that far. I do better working out Collatz sequences.

Hilary Price’s Rhymes With Orange for the 19th at least shows the attempt to relieve mathematics anxiety. I’m sympathetic. It does seem like there should be ways to relieve this (or any other) anxiety, but finding which ones work, and which ones work best, is partly a mathematical problem. As often happens with Price’s comics I’m particularly tickled by the gag in the title panel.

The Help Session ('Be sure to show your work'). 'It's simple --- if 3 deep breaths take 4.2 seconds, and your dread to confidence ratio is 2:1, how long will it take to alleviate your math anxiety?'
Hilary Price’s Rhymes With Orange for the 19th of October, 2016. I don’t think there’s enough data given to solve the problem. But it’s a start at least. Start by making a note of it on your suspiciously large sheet of paper.

Norm Feuti’s Gil rerun for the 19th builds on the idea calculators are inherently cheating on arithmetic homework. I’m sympathetic to both sides here. If Gil just wants to know that his answers are right there’s not much reason not to use a calculator. But if Gil wants to know that he followed the right process then the calculator’s useless. By the right process I mean, well, the work to be done. Did he start out trying to calculate the right thing? Did he pick an appropriate process? Did he carry out all the steps in that process correctly? If he made mistakes on any of those he probably didn’t get to the right answer, but it’s not impossible that he would. Sometimes multiple errors conspire and cancel one another out. That may not hurt you with any one answer, but it does mean you aren’t doing the problem right and a future problem might not be so lucky.

Zach Weinersmith’s Saturday Morning Breakfast Cereal rerun for the 19th has God crashing a mathematics course to proclaim there’s a largest number. We can suppose there is such a thing. That’s how arithmetic modulo a number is done, for one. It can produce weird results in which stuff we just naturally rely on doesn’t work anymore. For example, in ordinary arithmetic we know that if one number times another equals zero, then either the first number or the second, or both, were zero. We use this in solving polynomials all the time. But in arithmetic modulo 8 (say), 4 times 2 is equal to 0.

And if we recklessly talk about “infinity” as a number then we get outright crazy results, some of them teased in Weinersmith’s comic. “Infinity plus one”, for example, is “infinity”. So is “infinity minus one”. If we do it right, “infinity minus infinity” is “infinity”, or maybe zero, or really any number you want. We can avoid these logical disasters — so far, anyway — by being careful. We have to understand that “infinity” is not a number, though we can use numbers growing infinitely large.

Induction, meanwhile, is a great, powerful, yet baffling form of proof. When it solves a problem it solves it beautifully. And easily, too, usually by doing something like testing two special cases. Maybe three. At least a couple special cases of whatever you want to know. But picking the cases, and setting them up so that the proof is valid, is not easy. There’s logical pitfalls and it is so hard to learn how to avoid them.

Jon Rosenberg’s Scenes from a Multiverse for the 19th plays on a wonderful paradox of randomness. Randomness is … well, unpredictable. If I tried to sell you a sequence of random numbers and they were ‘1, 2, 3, 4, 5, 6, 7’ you’d be suspicious at least. And yet, perfect randomness will sometimes produce patterns. If there were no little patches of order we’d have reason to suspect the randomness was faked. There is no reason that a message like “this monkey evolved naturally” couldn’t be encoded into a genome by chance. It may just be so unlikely we don’t buy it. The longer the patch of order the less likely it is. And yet, incredibly unlikely things do happen. The study of impossibly unlikely events is a good way to quickly break your brain, in case you need one.
