The End 2016 Mathematics A To Z: The Fredholm Alternative


Some things are created with magnificent names. My essay today is about one of them. It’s one of my favorite terms and I get a strange little delight whenever it needs to be mentioned in a proof. It’s also the title I shall use for my 1970s Paranoid-Conspiracy Thriller.

The Fredholm Alternative.

So the Fredholm Alternative is about whether this supercomputer with the ability to monitor every commercial transaction in the country falls into the hands of the Parallax Corporation or whether — ahm. Sorry. Wrong one. OK.

The Fredholm Alternative comes from the world of functional analysis. In functional analysis we study sets of functions with tools from elsewhere in mathematics. Some you’d be surprised aren’t already in there. There’s adding functions together, multiplying them, the stuff of arithmetic. Some might be a bit surprising, like the stuff we draw from linear algebra. That’s ideas like functions having length, or being at angles to each other. Or that length and those angles changing when we take a function of those functions. This may sound baffling. But a mathematics student who’s got into functional analysis usually has a happy surprise waiting. She discovers the subject is easy. At least, it relies on a lot of stuff she’s learned already, applied to stuff that’s less difficult to work with than, like, numbers.

(This may be a personal bias. I found functional analysis a thoroughgoing delight, even though I didn’t specialize in it. But I got the impression from other grad students that functional analysis was well-liked. Maybe we just got the right instructor for it.)

I’ve mentioned in passing “operators”. These are functions that have a domain that’s a set of functions and a range that’s another set of functions. Suppose you come up to me with some function, let’s say f(x) = x^2. I give you back some other function — say, F(x) = \frac{1}{3}x^3 - 4. Then I’m acting as an operator.
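If it helps to see that in code, here is a little sketch of such an operator, written in Python. The particular operator, integrate from 0 to x and then subtract 4, is just something I cooked up so that handing it f(x) = x^2 gives back roughly the F(x) = \frac{1}{3}x^3 - 4 above; nothing about it is standard.

    # A toy "operator": it eats a function f and hands back a new function F.
    # This one integrates f from 0 up to x and then subtracts 4, so feeding
    # it f(x) = x**2 returns (approximately) F(x) = (1/3)x**3 - 4.
    # The operator is only an illustration of mine, not anything canonical.
    def T(f, steps=10_000):
        def F(x):
            h = x / steps                      # crude trapezoid-rule integral
            total = 0.5 * (f(0.0) + f(x))
            for k in range(1, steps):
                total += f(k * h)
            return total * h - 4.0
        return F

    f = lambda x: x ** 2
    F = T(f)                                   # the operator at work
    print(F(3.0))                              # close to (1/3)*27 - 4 = 5.0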

Why should I do such a thing? Many operators correspond to doing interesting stuff. Taking derivatives of functions, for example. Or undoing the work of taking a derivative. Describing how changing a condition changes what sorts of outcomes a process has. We do a lot of stuff with these. Trust me.

Let me use the name ‘T’ for some operator. I’m not going to say anything about what it does. The letter’s arbitrary. We like to use capital letters for operators because it makes the operators look extra important. And we don’t want to use ‘O’ because that just looks like zero and we don’t need that confusion.

Anyway. We need two functions. One of them will be called ‘f’ because we always call functions ‘f’. The other we’ll call ‘v’. In setting up the Fredholm Alternative we have this important thing: we know what ‘f’ is. We don’t know what ‘v’ is. We’re finding out something about what ‘v’ might be. The operator doing whatever it does to a function we write down as if it were multiplication, that is, like ‘Tv’. We get this notation from linear algebra. There we multiply matrices by vectors. Matrix-times-vector multiplication works like operator-on-a-function stuff. So much so that if we didn’t use the same notation young mathematics grad students would rise in rebellion. “This is absurd,” they would say, in unison. “The connotations of these processes are too alike not to use the same notation!” And the department chair would admit they have a point. So we write ‘Tv’.
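If you like the analogy worked out in numbers, here is a small Python sketch of my own devising. Sample a function at a handful of points to get a vector v, let a finite-difference matrix play the part of T, and the product Tv comes out looking like the derivative of the function. Same notation, same gesture.

    import numpy as np

    # My own miniature of the matrix-times-vector picture.  The vector v is
    # the function x**2 sampled at a few points; T is a forward-difference
    # matrix, so the product T @ v approximates the derivative 2x.
    n = 5
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    v = x ** 2                                        # the "function", as a vector

    T = (np.diag(-np.ones(n)) + np.diag(np.ones(n - 1), 1)) / h
    T[-1, :] = 0.0                                    # no sample past the last point

    print(T @ v)                                      # roughly 2x, except the last entry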

If you skipped out on mathematics after high school you might guess we’d write ‘T(v)’ and that would make sense too. And, actually, we do sometimes. But by the time we’re doing a lot of functional analysis we don’t need the parentheses so much. They don’t clarify anything we’re confused about, and they require all the work of parenthesis-making. But I do see it sometimes, mostly in older books. This makes me think mathematicians started out with ‘T(v)’ and then wrote less as people got used to what they were doing.

I admit we might not literally know what ‘f’ is. I mean we know what ‘f’ is in the same way that, for a quadratic equation ax^2 + bx + c = 0, we “know” what ‘a’, ‘b’, and ‘c’ are. Similarly we don’t know what ‘v’ is in the same way we don’t know what ‘x’ is there. The Fredholm Alternative tells us that exactly one of two things has to be true.

For operators that meet some requirements I don’t feel like getting into, either:

  1. There’s one and only one ‘v’ which makes the equation Tv = f true.
  2. Or else Tv = 0 for some ‘v’ that isn’t just zero everywhere.

That is, either there’s exactly one solution, or else there’s no pinning this particular equation down to a single answer: it has either no solution at all or a whole family of them, any two of which differ by something the operator sends to zero. What we can rule out is there being exactly two solutions (the way quadratic equations often have), or ten solutions (the way some annoying problems will).
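If you want a finite-dimensional caricature of this, here is a Python sketch with tiny matrices standing in for operators. The matrices are ones I picked for smallness; the first lands in the first branch of the alternative, the second in the second.

    import numpy as np

    # A 2-by-2 caricature of the Fredholm Alternative (my own toy example).
    # For a square matrix T, exactly one of these holds:
    #   1. T v = f has one and only one solution, or
    #   2. T v = 0 has a solution v that is not all zeros.
    f = np.array([1.0, 2.0])

    T1 = np.array([[2.0, 0.0],
                   [0.0, 3.0]])              # invertible: the first branch
    print(np.linalg.solve(T1, f))            # the one and only v with T1 v = f

    T2 = np.array([[1.0, 2.0],
                   [2.0, 4.0]])              # singular: the second branch
    print(T2 @ np.array([2.0, -1.0]))        # [0. 0.]: a nonzero v with T2 v = 0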

It turns up often in boundary value problems. Often before we try solving one we spend some time working out whether there is a solution. You can imagine why it’s worth spending a little time working that out before committing to a big equation-solving project. But it comes up elsewhere. Very often we have problems that, at their core, are “does this operator match anything at all in the domain to a particular function in the range?” When we try to answer we stumble across Fredholm’s Alternative over and over.

Fredholm here was Ivar Fredholm, a Swedish mathematician of the late 19th and early 20th centuries. He worked at Stockholm University, and for the Swedish Social Insurance Agency, and as an actuary for the Skandia insurance company. Wikipedia tells me that his mathematical work was used to calculate buyback prices. I have no idea how.

The End 2016 Mathematics A To Z: Boundary Value Problems


I went to grad school at Rensselaer Polytechnic Institute. The joke at the school is that the mathematics department has two tracks, “Applied Mathematics” and “More Applied Mathematics”. So I got to know the subject of today’s A To Z very well. It’s worth your knowing too.

Boundary Value Problems.

I’ve talked about differential equations before. I’ll talk about them again. They’re important. They might be the most directly useful sort of higher mathematics. They turn up naturally whenever you have a system whose changes depend on the current state of things.

There are many kinds of differential equations problems. The ones that come first to mind, and that students first learn, are “initial value problems”. In these, you’re given some system, told how it changes in time, and told what things are at a start. There’s good reasons to do that. It’s conceptually easy. It describes all sorts of systems where something moves. Think of your classic physics problems of a ball being tossed in the air, or a weight being put on a spring, or a planet orbiting a sun. These are classic initial value problems. They almost look like natural experiments. Set a thing up and watch what happens.
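If it helps to see one written out, the tossed ball as an initial value problem might look like this, with y the ball’s height, g the acceleration of gravity, and v_0 the starting speed, names I’m picking for convenience:

    \frac{d^2 y}{dt^2} = -g, \qquad y(0) = 0, \qquad \frac{dy}{dt}(0) = v_0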

They’re not everything. There’s another class of problems at least as important. Maybe more important. In these we’re given how the parts of a system affect one another. And we’re told some information about the edges of the system. The boundaries, that is. And these are “boundary value problems”.

Mathematics majors learn them after getting thoroughly trained in and sick of initial value problems. There’s reasons for that. First is that they almost need to be about problems with multiple variables. You can set one up for, like, a ball tossed in the air. But they’re rarer. Differential equations for multiple variables are harder than differential equations for a single variable, because of course. We have to learn the tools of “partial differential equations”. In these we work out how the system changes if we pretend all but one of the variables is fixed. We combine information about all those changes for each individual changing variable. Lots more, and lots stranger, stuff can happen.

The partial differential equation describes some region. It involves maybe some space, maybe some time, maybe both. There’s a region, called the “domain”, for which the differential equation is true.

For example, maybe we’re interested in the amount of heat in a metal bar as it’s warmed on one end and cooled on another. The domain here is the length of the bar and the time it’s subjected to the heating and cooling. Or maybe we’re interested in the amount of water flowing through a section of a river bed. The domain here is the length and width and depth of the river, if we suppose the river isn’t swelling or shrinking or changing much. Maybe we’re interested in the electric field created by putting a bit of charge on a metal ball. Then the domain is the entire universe except the metal ball and the space inside it. We’re comfortable with boundlessly large domains.
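For that heated bar, the differential equation holding on the domain might be the heat equation. I’ll write it with u for the temperature, \alpha for a number describing how fast heat spreads through the metal, and L for the bar’s length; the names are my choices and the physics doesn’t care:

    \frac{\partial u}{\partial t} = \alpha \frac{\partial^2 u}{\partial x^2}, \qquad 0 < x < L, \qquad t > 0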

But what makes this a boundary value problem is that we know something about what the boundary looks like. Once again a mathematics term is less baffling than you might figure. The boundary is just what it sounds like: the edge of the domain, the part that divides the domain from not-the-domain. The metal bar being heated up has boundaries on either end. The river bed has boundaries at the surface of the water, the banks of the river, and the start and the end of wherever we’re observing. The metal ball has boundaries of the ball’s surface and … uh … the limits of space and time, somewhere off infinitely far away.

There’s all kinds of information we might get about a boundary. What we actually get is one of four kinds. The first kind is “we get told what values the solution should be at the boundary”. Mathematics majors love this because it lets us know we at least have the boundary’s values right. It’s certainly what we learn first. And it might be most common. If we’re measuring, say, temperature or fluid speed or something like that we feel like we can know what these are. If we need a name we call this “Dirichlet Boundary Conditions”. That’s named for Peter Gustav Lejeune Dirichlet. He’s one of those people mathematics majors keep running across. We get stuff named for him in mathematical physics, in probability, in heat, in Fourier series.
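For the heated-and-cooled bar, Dirichlet conditions might just pin down the temperature at each end, with numbers I’m making up:

    u(0, t) = 100, \qquad u(L, t) = 0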

The second kind is “we get told what the derivative of the solution should be at the boundary”. Mathematics majors hate this because we’re having a hard enough time solving this already and you want us to worry about the derivative of the solution on the boundary? Give us something we can check, please. But this sort of boundary condition keeps turning up. It comes up, for instance, in the electric field around a conductive metal box, or ball, or plate. The electric field will be, near the metal plate, perpendicular to the conductive metal. Goodness knows what the electric field’s value is, but we know something about how it changes. If we need a name we call this “Neumann Boundary Conditions”. This is not named for the applied mathematician/computer scientist/physicist John von Neumann. Nobody remembers the Neumann it is named for, who was Carl Neumann.
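For the bar, a Neumann condition might say the ends are insulated, so no heat flows across them. We don’t know the temperature there, but we know its derivative:

    \frac{\partial u}{\partial x}(0, t) = 0, \qquad \frac{\partial u}{\partial x}(L, t) = 0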

The third kind is called “Robin boundary conditions” if someone remembers the name for it. It’s slightly named for Victor Gustave Robin. In these we don’t necessarily know the value the solution should have on the boundary. And we don’t know what the derivative of the solution on the boundary should be. But we do know some linear combination of them. That is, we know some number times the original value plus some (possibly other) number times the derivative. Mathematics majors loathe this one because the Neumann boundary conditions were hard enough and now we have this? They turn up in heat and diffusion problems, when there’s something limiting the flow of whatever you’re studying into and out of the region.
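For the bar, a Robin condition might describe an end losing heat to the surrounding air in proportion to how warm it is, which ties the value and the derivative together. The h here is a number I’m inventing to stand for how readily heat leaks out:

    \frac{\partial u}{\partial x}(L, t) + h \, u(L, t) = 0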

And the last kind is called “mixed boundary conditions” as, I don’t know, nobody seems to have got their name attached to it. In this we break up the boundary. For some of it we get, say, Dirichlet boundary conditions. For some of the boundary we get, say, Neumann boundary conditions. Or maybe we have Robin boundary conditions for some of the edge and Dirichlet for others. Whatever. This mathematics majors get once or twice, as punishment for their sinful natures, and then we try never to think of them again because of the pain. Sometimes it’s the only approach that fits the problem. Still hurts.
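For the bar, a mixed setup might hold one end at a fixed temperature while insulating the other:

    u(0, t) = 100, \qquad \frac{\partial u}{\partial x}(L, t) = 0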

We see boundary value problems when we do things like blow a soap bubble using weird wireframes and ponder the shape. Or when we mix hot coffee and cold milk in a travel mug and ponder how the temperatures mix. Or when we see a pipe squeezing into narrower channels and wonder how this affects the speed of water flowing into and out of it. Often these will be problems about how stuff over a region, maybe of space and maybe of time, will settle down to some predictable, steady pattern. This is why it turns up all over applied mathematics problems, and why in grad school we got to know them so very well.