My 2019 Mathematics A To Z: Martingales


Today’s A To Z term was nominated again by @aajohannas. The other compelling nomination was from Vayuputrii, for the Mittag-Leffler function. I was tempted. But I realized I could not think of a clear way to describe why the function is interesting, or even where it comes from, without piling up technical terms. There’s no avoiding technical terms in writing about mathematics, but there’s only so much I want to put in at once. It also makes me realize I don’t understand the Mittag-Leffler function well, though it is, after all, something I haven’t worked with much.

The Mittag-Leffler function looks like it’s one of those things named for several contributors, like Runge-Kutta integration or the Cauchy-Kovalevskaya Theorem or something. Not so here; this was one person, Gösta Mittag-Leffler. His name’s all over the theory of functions. And he was one of the people who helped Sofia Kovalevskaya, whom you know from every list of pioneering women in mathematics, secure her professorship.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it, completes the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Martingales.

A martingale is how mathematicians prove you can’t get rich gambling.

Well, that exaggerates. Some people will be lucky, of course. But there’s no strategy that works. The only thing that works is to rig the game. You can do this openly, by setting rules that give you a slight edge; you usually have to be the house to do this. Or you can do it covertly, with card-counting (in blackjack), weighted dice, or other such tricks. But a fair game? Meaning one not biased towards or against any player? There’s no strategy that guarantees winning that.

We can make this more technical. Martingales arise from the world of stochastic processes. A stochastic process is an indexed set of random variables. A random variable is a variable whose value depends on the result of some phenomenon. A tossed coin. Rolled dice. The number of people crossing a particular walkway over a day. Engine temperature. The value of a stock being traded. Whatever. We can’t forecast what the next value will be. But we know the distribution: which values are more likely, which are unlikely, and which are impossible.

The field grew out of studying real-world phenomena. Things we could sample and do statistics on. So it’s hard to think of an index that isn’t time, or some proxy for time like “rolls of the dice”. Stochastic processes turn up all over the place. A lot of what we want to know is impossible, or at least impractical, to exactly forecast. Think of the work needed to forecast how many people will cross this particular walk four days from now. But it’s practical to describe what are more and less likely outcomes. What the average number of walk-crossers will be. What the most likely number will be. Whether to expect tomorrow to be a busier or a slower day.

And this is what the martingale is for. Start with a sequence of your random variables. How many people have crossed that walkway each day since you started studying. What is the expectation value, the best guess, for the next result? Your best guess for how many will cross tomorrow? Keeping in mind your knowledge of all these past values. That’s an important piece. It’s not a martingale if the history of results isn’t a factor.

Every probability question has to deal with knowledge. Sometimes it’s easy. The probability of a coin coming up tails next toss? That’s one-half. The probability of a coin coming up tails next toss, given that it came up tails last time? That’s still one-half. The probability of a coin coming up tails next toss, given that it came up tails the last 40 tosses? That’s … starting to make you wonder if this is a fair coin. I’d bet tails, but I’d also ask to examine both sides, for a start.

So a martingale is a stochastic process where we can make forecasts about the future. Particularly, the expectation value. The expectation value is the sum of the products of every possible value and how probable they are. In a martingale, the expected value for all time to come is just the current value. So if whatever it was you’re measuring was, say, 40 this time? That’s your expectation for the whole future. Specific values might be above 40, or below 40, but on average, 40 is it.
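In symbols, and this is just the standard textbook statement rather than anything special to this essay: write the sequence of random variables as X_1, X_2, X_3, and so on. Then

    \mathbb{E}[X] = \sum_{x} x \, P(X = x)    % expectation value: each possible value, weighted by its probability

    \mathbb{E}[X_{n+1} \mid X_1, X_2, \ldots, X_n] = X_n    % martingale property

The second line is the martingale property: condition on everything seen so far, and the expected next value is exactly the value you hold right now.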

Put it that way and you’d think, well, how often does that ever happen? Maybe some freak process will give you that, but most stuff?

Well, here’s one. The random walk. Set a starting value. At each step, it can increase or decrease by some fixed amount. It’s as likely to increase as to decrease. This is a martingale. And it turns out a lot of stuff is random walks, or can be processed into random walks. Even if the original walk is unbalanced, say, more likely to increase than decrease, we can do a transformation and find a new random variable based on the original. That new variable is as likely to increase as to decrease, and that one is a martingale.
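Here is a minimal sketch of that walk in Python; the starting value of 40 and the step size of 1 are illustrative choices of mine, not anything canonical.

    import random

    def random_walk(start=40, steps=100):
        """A symmetric random walk: each step is +1 or -1, equally likely."""
        value = start
        for _ in range(steps):
            value += random.choice([+1, -1])
        return value

    # A martingale's expected future value is its current value, so the
    # average final position over many walks should stay close to 40.
    trials = 100_000
    average = sum(random_walk() for _ in range(trials)) / trials
    print(average)  # typically lands very near 40

Any single walk can wander far from 40. It’s the average over the whole distribution that stays put.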

It’s not just random walks. Poisson processes are things where the chance of something happening is tiny, but it has lots of chances to happen. So this measures things like how many car accidents happen on this stretch of road each week. Or where a couple plants will grow together into a forest, as opposed to lone trees. How often a store will have too many customers for the cashiers on hand. These processes by themselves aren’t often martingales. But we can use them to make a new stochastic process, and that one is a martingale.
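To sketch the usual trick, with the standard caveats attached: if N(t) counts the events of a Poisson process arriving at an average rate λ, its expected value grows like λt, so N by itself is not a martingale. Subtract that growth off and you get one:

    \mathbb{E}[N(t)] = \lambda t

    M(t) = N(t) - \lambda t    % the compensated Poisson process, which is a martingale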

Where this all comes to gambling is in stopping times. A stopping time is a random variable based on the stochastic process you started with. It is a random time, with one key property: whether that time has arrived by a given index is something you can decide from the values observed up to that index, and nothing later. A typical example is the first index at which the process reaches some particular value. The language evokes a gambler’s decision: when do you stop? There are two obvious stopping times for any game. One is to stop when you’ve won enough money. The other is to stop when you’ve lost your whole stake.
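In symbols, for the discrete-time case: a stopping time τ is a random index such that, for every n, whether τ ≤ n can be decided from the first n values alone. The threshold of 50 below is just an illustrative number.

    \{\, \tau \le n \,\} \text{ is determined by } X_1, X_2, \ldots, X_n

    \tau = \min\{\, n : X_n \ge 50 \,\}    % the first time the process reaches 50: a stopping time

Something like “the last time the process is at 50” is not a stopping time, since deciding it requires knowing the future.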

So there is something interesting about a martingale that has bounds. It will almost certainly hit at least one of those bounds, in a finite time. (“Almost certainly” has a technical meaning. It’s the same thing I mean when I say if you flip a fair coin infinitely many times then “almost certainly” it’ll come up tails at least once. Like, it’s not impossible that it doesn’t. It just won’t happen.) And for the gambler? The boundary of “runs out of money” is a lot closer than “makes the house run out of money”.

Oh, if you just want a little payoff, that’s fine. If you’re happy to walk away from the table with a one percent profit? You can probably do that. You’re closer to that boundary than to the runs-out-of-money one. A ten percent profit? Maybe so. Making an unlimited amount of money, like you’d want to live on your gambling winnings? No, that just doesn’t happen.
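The optional stopping theorem makes those odds precise in the simplest model, a fair game with even-money bets, and this is a sketch of the standard result rather than anything new: start with a stake of a, and quit either when you go broke or when you reach a target b larger than a. Then

    P(\text{reach } b \text{ before going broke}) = \frac{a}{b}

A one percent profit means b = 1.01a, so the chance is about 0.99. A ten percent profit is about 0.91. Doubling your money is exactly one-half. And as the target grows without bound, the chance of ever reaching it drops to zero.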

This gets controversial when we turn from gambling to the stock market. Or a lot of financial mathematics. Look at the value of a stock over time. I write “stock” for my convenience. It can be anything with a price that’s constantly open for renegotiation. Stocks, bonds, exchange funds, used cars, fish at the market, anything. The price over time looks like it’s random, at least hour-by-hour. So how can you reliably make money if the fluctuations of the price of a stock are random?

Well, if I knew, I’d have smaller student loans outstanding. But martingales seem like they should offer some guidance. Much of modern finance builds on not dealing with the varying stock price directly. Instead, buy the right to buy the stock at a set price. Or buy the right to sell the stock at a set price. This lets you pay to secure a certain profit, or a worst-possible loss, in case the price reaches some level. And now you see the martingale. Is it likely that the stock will reach a certain price within this set time? How likely? This can, in principle, guide you to a fair price for this right-to-buy.
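Here is a toy version of that pricing idea, and only a toy: in a one-step binomial model, the stock can move up or down by known factors, and there is a unique probability of “up” under which the discounted stock price is a martingale. The fair price of the right-to-buy is then its expected payoff under that probability, discounted. All the numbers below are made up for illustration.

    def binomial_call_price(s0, strike, up, down, rate):
        """One-step binomial pricing of a right-to-buy (a call option).

        s0:     current stock price
        strike: price the option lets you buy at
        up:     gross return if the stock goes up (e.g. 1.2)
        down:   gross return if the stock goes down (e.g. 0.9)
        rate:   gross risk-free return over the step (e.g. 1.05)
        """
        # The unique "up" probability making the discounted stock price a martingale.
        q = (rate - down) / (up - down)
        payoff_up = max(s0 * up - strike, 0.0)
        payoff_down = max(s0 * down - strike, 0.0)
        # Fair price: discounted expected payoff under that martingale probability.
        return (q * payoff_up + (1 - q) * payoff_down) / rate

    print(binomial_call_price(s0=100, strike=105, up=1.2, down=0.9, rate=1.05))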

The mathematical reasoning behind that is fine, so far as I understand it. Trouble arises because pricing correctly means having a good understanding of how likely it is prices will reach different levels. Fortunately, there are few things humans are better at than estimating probabilities. Especially the probabilities of complicated situations, with abstract and remote dangers.

So martingales are an interesting corner of mathematics. They apply to purely abstract problems like random walks. Or to good mathematical physics problems like Brownian motion and the diffusion of particles. And they’re lurking behind the scenes of the finance news. Exciting stuff.


Thanks for reading. This and all the other Fall 2019 A To Z posts should be at this link. Yes, I too am amazed to be halfway done; it feels like I’m barely one-fifth of the way done. For Thursday I hope to publish ‘N’. And I am taking nominations for subjects for the letters O through T, at this link.

Reading the Comics, July 30, 2017: Not Really Mathematics edition


It’s been a busy enough week at Comic Strip Master Command that I’ll need to split the results across two essays. Any other week I’d be glad for this, since, hey, free content. But this week it hits a busy time and shouldn’t I have expected that? The odd thing is that the mathematics mentions have been numerous but not exactly deep. So let’s watch as I make something big out of that.

Mark Tatulli’s Heart of the City closed out its “Math Camp” storyline this week. It didn’t end up having much to do with mathematics and was instead about trust and personal responsibility issues. You know, like stories about kids who aren’t learning to believe in themselves and follow their dreams usually are. Since we never saw any real Math Camp activities we don’t get any idea what they were trying to do to interest kids in mathematics, which is a bit of a shame. My guess would be they’d play a lot of the logic-driven puzzles that are fun but that they never get to do in class. The story established that what I thought was an amusement park was instead a fair, so, that might be anywhere in Pennsylvania or a couple of other nearby states.

Rick Kirkman and Jerry Scott’s Baby Blues for the 25th sees Hammie have “another” mathematics worksheet accident. Could be any subject, really, but I suppose it would naturally be the one that hey wait a minute, why is he doing mathematics worksheets in late July? How early does their school district come back from summer vacation, anyway?

Hammie 'accidentally' taps a glass of water on his mathematics paper. Then tears it up. Then chews it. Mom: 'Another math worksheet accident?' Hammie: 'Honest, Mom, I think they're cursed!'
Rick Kirkman and Jerry Scott’s Baby Blues for the 25th of July, 2017. Almost as alarming: Hammie is clearly way behind on his “faking plausible excuses” homework. If he doesn’t develop the skills to make a credible excuse for why he didn’t do something, how is he ever going to dodge texts from people too important not to reply to?

Olivia Walch’s Imogen Quest for the 26th uses a spot of mathematics as the emblem for teaching. In this case it’s a bit of physics. And an important bit of physics, too: it’s the time-dependent Schrödinger Equation. This is the one that describes how, if you know the total energy of the system, and the rules that set its potential and kinetic energies, you can work out the function Ψ that describes it. Ψ is a function, and it’s a powerful one. It contains probability distributions: how likely whatever it is you’re modeling is to have a particle in this region, or in that region. How likely it is to have a particle with this much momentum, versus that much momentum. And so on. Each of these we find by applying a function to the function Ψ. It’s heady stuff, and amazing stuff to me. Ψ somehow contains everything we’d like to know. And different functions work like filters that make clear one aspect of that.
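For the record, the time-dependent Schrödinger Equation reads

    i\hbar \, \frac{\partial \Psi}{\partial t} = \hat{H} \Psi    % \hat{H}, the Hamiltonian operator, carries the kinetic and potential energies

and the probability of finding the particle in some region comes from integrating |\Psi|^2 over that region.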

Dan Thompson’s Brevity for the 26th is a joke about Sesame Street‘s Count von Count. Also about how we can take people’s natural aptitudes and delights and turn them into sad, droning unpleasantness in the service of corporate overlords. It’s fun.

Steve Sicula’s Home and Away rerun for the 26th is a misplaced Pi Day joke. It originally ran the 22nd of April, but in 2010, before Pi Day was nearly so much a thing.

Doug Savage’s Savage Chickens for the 26th proves something “scientific” by putting numbers into it. Particularly, by putting statistics into it. Understandable impulse. One of the great trends of the past century has been taking seriously the idea that we only understand things when we can measure them. And this implies statistics. Everything is unique. Only statistical measurement lets us understand what groups of similar things are like. Does something work better than the alternative? We have to run tests, and see how the something and the alternative each perform. Are they so similar that the differences between them could plausibly be chance alone? Are they so different that it strains belief that they’re equally effective? It’s one of science’s tools. It’s not everything that makes for science. But it is stuff easy to communicate in one panel.
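A minimal sketch of the kind of test that panel gestures at, with made-up numbers, and with the caveat that whether a two-sample t-test is even the right tool depends on the data:

    from scipy import stats

    # Made-up measurements: does the something work better than the alternative?
    something   = [12.1, 11.8, 12.5, 12.0, 12.3, 11.9, 12.4]
    alternative = [11.6, 11.9, 11.7, 12.0, 11.5, 11.8, 11.7]

    # Two-sample t-test: how plausible is it that the difference is chance alone?
    statistic, p_value = stats.ttest_ind(something, alternative)
    print(p_value)  # a small p-value makes "chance alone" a strained explanation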

Neil Kohney’s The Other End for the 26th is really a finance joke. It’s about the ways the finance industry can turn one thing into a dazzling series of trades and derivative trades. But this is a field that mathematics colonized, or that colonized mathematics, over the past generation. Mathematical finance has done a lot to shape ideas of how we might study risk, and probability, and how we might form strategies to use that risk. It’s also done a lot to shape finance. Pretty much any major financial crisis you’ve encountered since about 1990 has been driven by a brilliant new mathematical concept meant to govern risk crashing up against the fact that humans don’t behave the way some model said they should. Nor could they; models are simplified, abstracted concepts that let hard problems be approximated. Every model has its points of failure. Hopefully we’ll learn enough about them that major financial crises can become as rare as, for example, major bridge collapses or major airplane disasters.

Realistic Modeling


“Economic Realism (Wonkish)”, a blog entry by Paul Krugman in The New York Times, discusses a paper, “Chameleons: The Misuse Of Mathematical Models In Finance And Economics”, by Paul Pfleiderer of Stanford University. The paper surprises me by including a color picture of a chameleon right there on the front page, in an academic paper at that; I didn’t know you could include color pictures just for their visual appeal in academia these days. Anyway, Pfleiderer discusses the difficulty of what he terms filtering: making sure that the assumptions one makes to build a model — which are simplifications and abstractions of the real-world thing in which you’re interested — aren’t too far out of line with the way the real thing behaves.

This challenge, which I think of as verification or validation, is important when you deal with pure mathematical or physical models too. Some of it happens at the theoretical stage: is it realistic to model a fluid as if it had no viscosity? Unless you’re dealing with superfluid helium or something exotic like that, no, but you can do very good work with a model that isn’t too far off. Or there’s a classic model of the way magnetism forms, known as the Ising model, which in a very special case — a one-dimensional line — is simple enough that a high school student could solve it. (Well, a very smart high school student, one who’s run across an exotic function called the hyperbolic cosine, could do it.) But that model is so simple that it can’t capture the phase change: warm a magnet up past a critical temperature and it stops being magnetic. Is the model no good? If you aren’t interested in the phase change, it can still be good enough.
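For the curious, here is where the hyperbolic cosine shows up, hedged with the usual caveats: N spins on a line with free ends, coupling J, no external field, inverse temperature β.

    Z = \sum_{\{s_i = \pm 1\}} \exp\Big( \beta J \sum_{i=1}^{N-1} s_i s_{i+1} \Big) = 2 \, \big( 2 \cosh \beta J \big)^{N-1}

The free energy per spin this gives is a smooth function of temperature, which is exactly why this version of the model can never show the phase change.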

And then there is the numerical stage: if you’ve set up a computer program that is supposed to represent fluid flow, does it correctly find solutions? I’ve heard it claimed that the majority of time spent on a numerical project is spent in validating the results, and that isn’t even simply in finding and fixing bugs in the code. Even once the code is doing perfectly what we mean it to do, it must be checked that what we mean it to do is relevant to what we want to know.
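A toy illustration of that kind of check, with an example of my own choosing rather than anything from the paper: solve dy/dt = -y numerically, then compare against the exact solution y(t) = e^{-t}, which in this artificial case we happen to know.

    import math

    def euler_solve(step=0.001, end_time=1.0):
        """Solve dy/dt = -y with y(0) = 1 by Euler's method."""
        y = 1.0
        for _ in range(int(round(end_time / step))):
            y += step * (-y)
        return y

    # Validation: compare the numerical answer to the known exact value e^(-1).
    error = abs(euler_solve() - math.exp(-1.0))
    print(error)  # should shrink roughly in proportion to the step size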

Pfleiderer’s is an interesting paper and I think worth the read; despite its financial mathematics focus (and a brief chat about quantum mechanics) it doesn’t require any particularly specialized training. There are some discussions of particular financial models, but what’s important are the assumptions being made behind those models, and those are intelligible without prior training in the field.
