My 2019 Mathematics A To Z: Martingales

Today’s A To Z term was nominated again by @aajohannas. The other compelling nomination was from Vayuputrii, for the Mittag-Leffler function. I was tempted. But I realized I could not think of a clear way to describe why the function is interesting, or even where it comes from, without piling up technical terms. There’s no avoiding technical terms in writing about mathematics, but there’s only so many I want to put in at once. It also makes me realize I don’t understand the Mittag-Leffler function; but then, it’s something I haven’t worked much with.

The Mittag-Leffler function looks like it’s one of those things named for several contributors, like Runge-Kutta Integration or Cauchy-Kovalevskaya Theorem or something. Not so here; this was one person, Gösta Mittag-Leffler. His name’s all over the theory of functions. And he was one of the people helping Sofia Kovalevskaya, whom you know from every list of pioneering women in mathematics, secure her professorship.

Martingales.

A martingale is how mathematicians prove you can’t get rich gambling.

Well, that exaggerates. Some people will be lucky, of course. But there’s no strategy that works. The only strategy that works is to rig the game. You can do this openly, by setting rules that give you a slight edge. You usually have to be the house to do this. Or you can do it covertly, using tricks like card-counting (in blackjack) or weighted dice or other tricks. But a fair game? Meaning one not biased towards or against any player? There’s no strategy to guarantee winning that.

We can make this more technical. Martingales arise from the world of stochastic processes. A stochastic process is an indexed set of random variables. A random variable is a variable whose value depends on the result of some phenomenon. A tossed coin. Rolled dice. The number of people crossing a particular walkway over a day. Engine temperature. The value of a stock being traded. Whatever. We can’t forecast what the next value will be. But we know the distribution: which values are more likely, which are unlikely, and which are impossible.

The field grew out of studying real-world phenomena. Things we could sample and do statistics on. So it’s hard to think of an index that isn’t time, or some proxy for time like “rolls of the dice”. Stochastic processes turn up all over the place. A lot of what we want to know is impossible, or at least impractical, to exactly forecast. Think of the work needed to forecast how many people will cross this particular walk four days from now. But it’s practical to describe what are more and less likely outcomes. What the average number of walk-crossers will be. What the most likely number will be. Whether to expect tomorrow to be a busier or a slower day.

And this is what the martingale is for. Start with a sequence of your random variables: how many people have crossed that street each day since you started studying. What is the expectation value, the best guess, for the next result? Your best guess for how many will cross tomorrow? Keeping in mind your knowledge of how all these past values turned out. That’s an important piece. It’s not a martingale if the history of results isn’t a factor.

Every probability question has to deal with knowledge. Sometimes it’s easy. The probability of a coin coming up tails next toss? That’s one-half. The probability of a coin coming up tails next toss, given that it came up tails last time? That’s still one-half. The probability of a coin coming up tails next toss, given that it came up tails the last 40 tosses? That’s … starting to make you wonder if this is a fair coin. I’d bet tails, but I’d also ask to examine both sides, for a start.

So a martingale is a stochastic process where we can make forecasts about the future. Particularly, the expectation value. The expectation value is the sum, over every possible value, of that value times its probability. In a martingale, the expected value for all time to come is just the current value. So if whatever it is you’re measuring was, say, 40 this time? That’s your expectation for the whole future. Specific values might be above 40, or below 40, but on average, 40 is it.
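That sum-of-value-times-probability rule is easy to check directly. A minimal sketch in Python, with a made-up distribution centered on 40 (the numbers are mine, purely for illustration):

```python
# Expectation value: sum of each possible value times its probability.
# The distribution here is invented for illustration.
distribution = {38: 0.2, 40: 0.6, 42: 0.2}  # value -> probability

expected = sum(value * prob for value, prob in distribution.items())
print(expected)  # close to 40: the distribution is symmetric around it
```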

Put it that way and you’d think, well, how often does that ever happen? Maybe some freak process will give you that, but most stuff?

Well, here’s one. The random walk. Set a value. At each step, it can increase or decrease by some fixed value. It’s as likely to increase as to decrease. This is a martingale. And it turns out a lot of stuff is random walks. Or can be processed into random walks. Even if the original walk is unbalanced — say it’s more likely to increase than decrease. Then we can do a transformation, and find a new random variable based on the original. Then that one is as likely to increase as decrease. That one is a martingale.

It’s not just random walks. Poisson processes are things where the chance of something happening is tiny, but it has lots of chances to happen. So this measures things like how many car accidents happen on this stretch of road each week. Or where a couple plants will grow together into a forest, as opposed to lone trees. How often a store will have too many customers for the cashiers on hand. These processes by themselves aren’t often martingales. But we can use them to make a new stochastic process, and that one is a martingale.
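The textbook construction, not anything this post commits to in detail, is to subtract the expected count: if N(t) counts Poisson events arriving at rate λ, then N(t) − λt has expected change zero, and that compensated process is a martingale. A quick numerical check, with an invented rate:

```python
import random

random.seed(2)
rate = 3.0    # average events per week; the figure is illustrative
weeks = 50

def poisson_count(lam):
    """Events in one unit of time, simulated by exponential gaps."""
    t, n = 0.0, 0
    while True:
        t += random.expovariate(lam)
        if t > 1.0:
            return n
        n += 1

# The raw count N(t) drifts upward. The compensated process
# N(t) - rate*t has expected change zero: it is a martingale.
samples = []
for _ in range(5000):
    n = sum(poisson_count(rate) for _ in range(weeks))
    samples.append(n - rate * weeks)
comp_mean = sum(samples) / len(samples)
print(comp_mean)  # close to 0
```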

Where this all comes to gambling is in stopping times. A stopping time is a random variable based on the stochastic process you started with. Its value is an index, a time to quit, with the rule that whether you have stopped by any particular index can depend only on the history up to that index, never on the future. The language evokes a gambler’s decision: when do you stop? There are two obvious stopping times for any game. One is to stop when you’ve won enough money. The other is to stop when you’ve lost your whole stake.

So there is something interesting about a martingale that has bounds. It will almost certainly hit at least one of those bounds, in a finite time. (“Almost certainly” has a technical meaning. It’s the same thing I mean when I say if you flip a fair coin infinitely many times then “almost certainly” it’ll come up tails at least once. Like, it’s not impossible that it doesn’t. It just won’t happen.) And for the gambler? The boundary of “runs out of money” is a lot closer than “makes the house run out of money”.

Oh, if you just want a little payoff, that’s fine. If you’re happy to walk away from the table with a one percent profit? You can probably do that. You’re closer to that boundary than to the runs-out-of-money one. A ten percent profit? Maybe so. Making an unlimited amount of money, like you’d want to live on your gambling winnings? No, that just doesn’t happen.
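Those boundary odds can be checked numerically. For a fair ±1 game, standard gambler’s-ruin theory says the chance of reaching a goal of g before ruin, starting from a stake of s, is s/g. A sketch with illustrative stakes of my own choosing:

```python
import random

random.seed(3)

def reaches_goal(stake, goal):
    """Fair +/-1 bets: True if the bankroll hits `goal` before 0."""
    x = stake
    while 0 < x < goal:
        x += 1 if random.random() < 0.5 else -1
    return x == goal

trials = 2000

# Quit at a 1 percent profit: theory says P = 100/101, about 0.99.
small_win = sum(reaches_goal(100, 101) for _ in range(trials)) / trials
print(small_win)

# Quit at a 10 percent profit: theory says P = 100/110, about 0.91.
bigger_win = sum(reaches_goal(100, 110) for _ in range(trials)) / trials
print(bigger_win)
```

And a goal of unlimited winnings puts the far boundary at infinity, where the chance of reaching it first drops to zero.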

This gets controversial when we turn from gambling to the stock market. Or a lot of financial mathematics. Look at the value of a stock over time. I write “stock” for my convenience. It can be anything with a price that’s constantly open for renegotiation. Stocks, bonds, exchange funds, used cars, fish at the market, anything. The price over time looks like it’s random, at least hour-by-hour. So how can you reliably make money if the fluctuations of the price of a stock are random?

Well, if I knew, I’d have smaller student loans outstanding. But martingales seem like they should offer some guidance. Much of modern finance builds on not dealing with the varying stock price directly. Instead, buy the right to buy the stock at a set price. Or buy the right to sell the stock at a set price. This lets you pay to secure a certain profit, or cap your worst possible loss, in case the price reaches some level. And now you see the martingale. Is it likely that the stock will reach a certain price within this set time? How likely? This can, in principle, guide you to a fair price for this right-to-buy.
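The martingale connection can be made concrete with the simplest textbook model, a one-period binomial sketch. This is a generic illustration, not anything from the post, and every number in it is invented: the fair price of the right-to-buy comes out as the expected payoff under the probability that makes the stock price itself a martingale.

```python
# One-period binomial sketch of pricing a right-to-buy (a call option).
# All figures are illustrative. The stock is worth 100 now and will be
# either 110 or 90 next period; interest is ignored for simplicity.
up, down, now = 110.0, 90.0, 100.0
strike = 100.0

# Risk-neutral probability: the value of q that makes the stock price
# itself a martingale, i.e. q*up + (1-q)*down == now.
q = (now - down) / (up - down)

payoff_up = max(up - strike, 0.0)      # exercise: buy at 100, worth 110
payoff_down = max(down - strike, 0.0)  # not worth exercising

price = q * payoff_up + (1 - q) * payoff_down
print(price)  # 5.0 with these illustrative numbers
```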

The mathematical reasoning behind that is fine, so far as I understand it. Trouble arises because pricing correctly means having a good understanding of how likely it is prices will reach different levels. Fortunately, there are few things humans are better at than estimating probabilities. Especially the probabilities of complicated situations, with abstract and remote dangers.

So martingales are an interesting corner of mathematics. They apply to purely abstract problems like random walks. Or to good mathematical physics problems like Brownian motion and the diffusion of particles. And they’re lurking behind the scenes of the finance news. Exciting stuff.

Thanks for reading. This and all the other Fall 2019 A To Z posts should be at this link. Yes, I too am amazed to be halfway done; it feels like I’m barely one-fifth of the way done. For Thursday I hope to publish ‘N’. And I am taking nominations for subjects for the letters O through T, at this link.

Reading the Comics, January 5, 2019: Start of the Year Edition

With me wrapping up the mathematically-themed comic strips that ran the first of the year, you can see how far behind I’m falling keeping everything current. In my defense, Monday was busier than I hoped it would be, so everything ran late. Next week is looking quite slow for comics, so maybe I can catch up then. I will never catch up on anything the rest of my life, ever.

Scott Hilburn’s The Argyle Sweater for the 2nd is a bit of wordplay about regular and irregular polygons. Many mathematical constructs, in geometry and elsewhere, come in “regular” and “irregular” forms. The regular form usually has symmetries that make it stand out. For polygons, this is each side having the same length, and each interior angle being congruent. Irregular is everything else. The symmetries which constrain the regular version of anything often mean we can prove things we otherwise can’t. But most of anything is the irregular. We might know fewer interesting things about them, or have a harder time proving them.

I’m not sure what the teacher would be asking for in how to “make an irregular polygon regular”. I mean if we pretend that it’s not setting up the laxative joke. I can think of two alternatives that would make sense. One is to draw a polygon with the same number of sides and the same perimeter as the original. The other is to draw a polygon with the same number of sides and the same area as the original. I’m not sure of the point of either. I suppose polygons of the same area have some connection to quadrature, that is, integration. But that seems like it’s higher-level stuff than this class should be doing. I hate to question the reality of a comic strip but that’s what I’m forced to do.

Bud Fisher’s Mutt and Jeff rerun for the 4th is a gambler’s fallacy joke. Superficially the gambler’s fallacy seems to make perfect sense: the chance of twelve bad things in a row has to be less than the chance of eleven bad things in a row. So after eleven bad things, the twelfth has to come up good, right? But there are two ways this can go wrong.

Suppose each attempted thing is independent. In this case each patient is equally likely to live or die, regardless of what’s come before. And then the eleven deaths don’t make it any more likely that the next patient will live.
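The independent case is easy to check numerically, with fair coin flips standing in for the surgeries. The flip after a run of five tails still comes up tails half the time:

```python
import random

random.seed(4)

# 200,000 independent fair flips; True means tails.
flips = [random.random() < 0.5 for _ in range(200000)]

# Look at every flip that immediately follows a run of five tails.
after_run = [flips[i] for i in range(5, len(flips)) if all(flips[i-5:i])]
tails_rate = sum(after_run) / len(after_run)
print(tails_rate)  # still close to one-half
```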

Suppose each attempted thing is not independent, though. This is easy to imagine. Each surgery, for example, is a chance for the surgeon to learn what to do, or not do. He could be getting better, that is, more likely to succeed, each operation. Or the failures could reflect the surgeon’s skills declining, perhaps from overwork or age or a loss of confidence. Impossible to say without more data. Eleven deaths on what context suggests are low-risk operations suggest a poor chance of surviving any given surgery, though. I’m on Jeff’s side here.

Mark Anderson’s Andertoons for the 5th is a welcome return of Wavehead. It’s about ratios. My impression is that ratios don’t get much attention in themselves anymore, except to dunk on stupid Twitter comments. It’s too easy to jump right into fractions, and division. Ratios underlie this, at least historically. It’s even in the name, ‘rational numbers’.

Wavehead’s got a point in literally comparing apples and oranges. It’s at least weird to compare directly different kinds of things. This is one of those conceptual gaps between ancient mathematics and modern mathematics. We’re comfortable stripping the units off of numbers, and working with them as abstract entities. But that does mean we can calculate things that don’t make sense. This produces the occasional bit of fun on social media where we see something like Google trying to estimate a movie’s box office per square inch of land in Australia. Just because numbers can be combined doesn’t mean they should be.

Larry Wright’s Motley rerun for the 5th has the form of a story problem. And one timely to the strip’s original appearance in 1987, during the National Football League players strike. The setup, talking about the difference in weekly pay between the real players and the scabs, seems like it’s about the payroll difference. The punchline jumps to another bit of mathematics, the point spread. Which is an estimate of the expected difference in scoring between teams. I don’t know for a fact, but would imagine the scab teams had nearly meaningless point spreads. The teams were thrown together extremely quickly, without much training time. The tools to forecast what a team might do wouldn’t have the data to rely on.

The at-least-weekly appearances of Reading the Comics in these pages are at this link.

Reading the Comics, October 14, 2016: Classics Edition

The mathematically-themed comic strips of the past week tended to touch on some classic topics and classic motifs. That’s enough for me to declare a title for these comics. Enjoy, won’t you please?

John McPherson’s Close To Home for the 9th uses the classic board full of mathematics to express deep thinking. And it’s deep thinking about sports. Nerds like to dismiss sports as trivial and so we get the punch line out of this. But models of sports have been one of the biggest growth fields in mathematics the past two decades. And they’ve shattered many longstanding traditional understandings of strategy. It’s not proper mathematics on the board, but that’s all right. It’s not proper sabermetrics either.

Vic Lee’s Pardon My Planet for the 10th is your classic joke about putting mathematics in marketable terms. There is an idea that a mathematical idea, to be really good, must be beautiful. And it’s hard to say exactly what beauty is, but “short” and “simple” seem to be parts of it. That’s a fine idea, as long as you don’t forget how context-laden these are. Whether an idea is short depends on what ideas and what concepts you have as background. Whether it’s simple depends on how much you’ve seen similar ideas before. π looks simple. “The smallest positive root of the solution to the differential equation y''(x) = -y(x) where y(0) = 0 and y'(0) = 1” looks hard. But however much the rhetoric differs, those are the same thing.
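That differential-equation description of π can even be checked numerically. A sketch of my own, not from the strip: integrate y'' = -y from those starting conditions (the solution is sin x) and find where it first returns to zero.

```python
# Integrate y'' = -y with y(0) = 0, y'(0) = 1 (the solution is sin x)
# and find the smallest positive root, which should come out as pi.
h = 1e-5                 # step size
x, y, v = 0.0, 0.0, 1.0  # independent variable, y, and y'
while True:
    v -= h * y           # semi-implicit (symplectic) Euler step
    y += h * v
    x += h
    if y <= 0.0 and x > 1.0:   # first return to zero past the start
        break
print(x)  # close to 3.14159...
```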

Scott Hilburn’s The Argyle Sweater for the 10th is your classic anthropomorphic-numerals joke. Well, anthropomorphic-symbols in this case. But it’s the same genre of joke.

Randy Glasbergen’s Glasbergen Cartoons rerun for the 10th is your classic sudoku-and-arithmetic-as-hard-work joke. And it’s neat to see “programming a VCR” used as an example of the difficult-to-impossible task for a comic strip drawn late enough that it’s into the era of flat-screen, flat-bodied desktop computers.

Bill Holbrook’s On The Fastrack for the 11th is your classic grumbling-about-how-mathematics-is-understood joke. Well, statistics, but most people consider that part of mathematics. (One could mount a strong argument that statistics is as independent of mathematics as physics or chemistry are.) Statistics offers many chances for intellectual mischief, whether deliberately or just from not thinking matters through. That may be inevitable. Sampling, as in political surveys, must talk about distributions, about ranges of possible results. It’s hard to be flawless about that.

That said, I’m not sure I can agree with Fi here. Take her example: a political poll with a three-point margin of error. If the poll says one candidate’s ahead by three points, Fi asserts, they’ll say it’s tied when it’s as likely the lead is six. I don’t see that’s quite true, though. When we sample something, we estimate the value of something in a population based on what it is in the sample. Obviously we’ll be very lucky if the population and the sample have exactly the same value. But the margin of error gives us a range of how far from the sample value it’s plausible the whole population’s value is, or would be if we could measure it. Usually “plausible” means 95 percent; that is, 95 percent of the time the actual value will be within the margin of error of the sample’s value.

So here’s where I disagree with Fi. Let’s suppose that the first candidate, Kirk, polls at 43 percent. The second candidate, Picard, polls at 40 percent. (Undecided or third-party candidates make up the rest.) I agree that Kirk’s true, whole-population support is equally likely to be 40 percent or 46 percent. But Picard’s true, whole-population support is equally likely to be 37 percent or 43 percent. Kirk’s lead is actually six points if his support was under-represented in the sample and Picard’s was over-represented, by the same amounts. But suppose Kirk was just as over-represented and Picard just as under-represented as they were in the previous case. This puts Kirk at 40 percent and Picard at 43 percent, a Kirk lead of minus three percentage points.

So what’s the actual chance these two candidates are tied? Well, you have to say what a tie is. It’s vanishingly unlikely they have precisely the same true support, and we can’t really calculate that anyway. Don’t blame statisticians. You tell me an election in which one candidate gets three more votes than the other isn’t really tied, if there are more than seven votes cast. We can work on “what’s the chance the difference in their support is less than some margin?” And then you’d have all the possible chances where Kirk gets a lower-than-surveyed return while Picard gets a higher-than-surveyed return. I can’t say what that is offhand. We haven’t said what this margin-of-tying is, for one thing.

But it is certainly higher than the chance the lead is actually six; that only happens if the actual vote is different from the poll in one particular way. A tie can happen if the actual vote is different from the poll in many different ways.

Doing a quick and dirty little numerical simulation suggests to me that, if we assume the sampling respects the standard normal distribution, then in this situation Kirk probably is ahead of Picard. Given a three-point lead and a three-point margin for error Kirk would be expected to beat Picard about 92 percent of the time, while Picard would win about 8 percent of the time.
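One way to reproduce that quick and dirty simulation, assuming each candidate’s sampling error is independent and normally distributed, with the three-point margin of error read as a 95 percent confidence interval (so a standard deviation of about 3/1.96 points):

```python
import random

random.seed(5)

sigma = 3 / 1.96   # a 3-point margin of error at 95 percent confidence
trials = 100000

# Kirk polls at 43, Picard at 40; sample each race once per trial and
# count how often Kirk's true support comes out ahead.
kirk_ahead = sum(
    random.gauss(43, sigma) > random.gauss(40, sigma)
    for _ in range(trials)
) / trials
print(kirk_ahead)  # close to 0.92
```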

Here I have been making the assumption that Kirk’s and Picard’s support are to an extent independent. That is, a vote might be for Kirk or for Picard or for neither. There’s this bank of voting-for-neither voters that either candidate could draw on. If there are no undecided voters, so that every voter is either for Kirk or for Picard, then all of this breaks down: Kirk can be up by six only if Picard is down by six. But I don’t know of surveys that work like that.

Not to keep attacking this particular strip, which doesn’t deserve harsh treatment, but it gives me so much to think about. Assuming by “they” Fi means news anchors — and from what we get on panel, it’s not actually clear she does — I’m not sure they actually do “say the poll is tied”. What I more often remember hearing is that the difference is equal to, or less than, the survey’s margin of error. That might get abbreviated to “a statistical tie”, a usage that I think is fair. But Fi might mean the candidates or their representatives in saying “they”. I can’t fault the campaigns for interpreting data in ways useful for their purposes. The underdog needs to argue that the election can yet be won. The leading candidate needs to argue against complacency. In either case a tie is a viable selling point and a reasonable interpretation of the data.

Gene Weingarten, Dan Weingarten, and David Clark’s Barney and Clyde for the 12th is a classic use of Einstein and general relativity to explain human behavior. Everyone’s tempted by this. Usually it’s thermodynamics that inspires thoughts that society could be explained mathematically. There’s good reason for this. Thermodynamics builds great and powerful models of complicated systems by supposing that we never know, or need to know, what any specific particle of gas or fluid is doing. We care only about aggregate data. That statistics shows we can understand much about humanity without knowing fine details reinforces this idea. The Weingartens and Clark probably shifted from thermodynamics to general relativity because Einstein is recognizable to normal people. And we’ve all at least heard of mass warping space and can follow the metaphor to money warping law.

In vintage comics, Dan Barry’s Flash Gordon for the 14th (originally run the 28th of November, 1961) uses the classic idea that sufficient mathematics talent will outwit games of chance. Many believe it. I remember my grandmother’s disappointment that she couldn’t bring the underaged me into the casinos in Atlantic City. This did save her the disappointment of learning I haven’t got any gambling skill besides occasionally buying two lottery tickets if the jackpot is high enough. I admit that’s an irrational move on my part, but I can spare two dollars for foolishness once or twice a year. The idea of beating a roulette wheel, at least a fair wheel, isn’t absurd. In principle if you knew enough about how the wheel was set up and how the ball was weighted and how it was launched into the spin you could predict where it would land. In practice, good luck. I wouldn’t be surprised if a good roulette wheel weren’t chaotic, or close to it. If it’s chaotic then while the outcome could be predicted if the wheel’s spin and the ball’s initial speed were known well enough, they can’t be measured well enough for a prediction to be meaningful. The comic also uses the classic word balloon full of mathematical symbols to suggest deep reasoning. I spotted Einstein’s famous quote there.

Reading the Comics, November 4, 2015: Gambling Edition

I don’t presume to guess why. But Comic Strip Master Command sent out orders one lead-time ago to have everybody do jokes that relate to gambling. We see the consequences here.

John Rose’s Barney Google and Snuffy Smith for the 2nd of November builds its joke on the idea that the mathematics of gambling is all anyone really needs. It’s a better-than-average crack about the usefulness of mathematics. It’s also truer than average. Much of how we make decisions is built on the expectation value, a core concept of probability. If we do this, what can we expect to gain or lose? If we do that instead, what would we expect? If we can place a value — even a loose, approximate value — on our time, our money, our experiences, we gain a new tool for making decisions.

Probability runs through the history of mathematics. That’s euphemistic. Gambling runs through the history of mathematics. Quite a bit of what we call probability derives from people who wanted to better understand games of chance, and to get an edge in the bets they might place. A question like “how many ways can three dice come up?” is a good homework problem today. It was once a subject of serious study and argument. We realize it’s still a good question when we wonder if the first die coming up 6, the second 3, and the third 1 is a different outcome from the first die coming up 3, the second 1, and the third 6.
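The two ways of counting give different answers, which is part of what made the question worth arguing over. A quick check in Python:

```python
from itertools import product

# Ordered outcomes: a 6, then a 3, then a 1 is different from
# a 3, then a 1, then a 6.
ordered = list(product(range(1, 7), repeat=3))
print(len(ordered))    # 216, that is, 6**3

# Unordered outcomes: those two rolls count as the same.
unordered = {tuple(sorted(roll)) for roll in ordered}
print(len(unordered))  # 56
```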

Fully understanding the mathematics of gambling requires not just counting and not just fractions. It will bring us to algebra, to calculus, and to all the tools that let us understand thermodynamics and quantum mechanics. If that isn’t everything, that is a good rough approximation.

Scott Adams’s Dilbert Classic for the 2nd of November originally ran the 8th of September, 1992. It’s about a sadly common kind of nerd behavior, the desire to one-up one’s stories of programming hardship. In this one the generic guy — a different figure from Adams’s current model of generic guy — asserts he goes back to before binary numbers, even. I admit skepticism. Certainly you could list different numbers by repeating the same symbol enough times. We do that when we resort to tally marks. But we need some second symbol to note the end of a number. With tally marks we can do that with physical space. A computer’s memory, though? That needs something else.

Kevin Fagan’s Drabble began a story about the logic of buying a lottery ticket this week. (The story goes on several days past this.) This is another probability, that is gambling, problem. Large jackpots present a pretty good philosophical challenge. It’s possible the jackpot will be so large that the expected value of buying a ticket is positive. This would seem to imply you should buy a ticket. But your chance of winning will be, as ever, vanishingly small. One chance in 200 million or more. You will not win. This would seem to imply you should not buy a ticket. Both are hard arguments to refute. I admit that when the jackpot gets sufficiently large, I’ll buy one or two tickets. I don’t expect to win the $200 million jackpot or anything like that, though. I’ll be content if I can secure a cozy little $25,000 minor prize. But I might just get a long john doughnut instead.
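Both arguments rest on the same computation. A sketch of the expected value per ticket, using illustrative round numbers (real lotteries have minor prizes, taxes, and split jackpots that complicate this):

```python
# Expected value of one ticket: chance of the jackpot times the jackpot,
# minus the ticket price. Figures are illustrative round numbers.
chance = 1 / 200_000_000
price = 2.00

def expected_value(jackpot):
    return chance * jackpot - price

print(expected_value(200_000_000))  # about -1.0: a losing bet on average
print(expected_value(500_000_000))  # about 0.5: positive, yet you still won't win
```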

Larry Wright’s Motley for the 2nd of November originally ran that day in 1987. It name-drops E = mc² as shorthand for genius, the equation’s general role.

Doug Bratton’s Pop Culture Shock Therapy for the 3rd of November doesn’t mention E = mc², but it is an Albert Einstein joke. It doesn’t build on the comforting but dubious legend of Einstein being a poor student. That’s an unusual direction.

Eric the Circle for the 3rd of November is by “Shane”. It’s a cute joke: if Eric were in a horserace, how would his lead be measured? Obviously, by comparison to his diameter. I doubt the race caller would need so many digits past the decimal, though. If cartoons and old-time radio sitcoms about horseracing haven’t led me wrong, distances are measured in a couple common fractions of a horse length — a half, a quarter, three-quarters and so on. So surely Eric would be called “about seven radii” or “three and a half diameters” ahead. It would make sense if his lead were measured by circumferences, if he’s rolling along. But it can be surprisingly hard to estimate by eye what the circumference of a circle is. Diameters are easier.

Jonathan Lemon’s Rabbits Against Magic for the 4th of November has a Möbius strip joke. Obviously, though, what’s taking so long is that Eightball’s spare tire isn’t even on the rim. This is bad.

John Zakour and Scott Roberts’s Working Daze for the 4th of November is a variation on the joke about mathematicians being lousy at arithmetic. Here it’s an accountant who’s bad. I am reminded of the science fiction great Arthur C Clarke mentioning his time as an accounts auditor. He supposed that as long as figures added up approximately, to something like one percent, then there probably wasn’t anything requiring further scrutiny going on. He was able to finish his day’s work quickly, and went on to other jobs in time. Bob Newhart also claimed to not demand too much precision in the accounts he was overseeing. He then went on to sell comedy records to radio stations for a fair bit less than they cost to produce, so perhaps he was better off not working on the money side of things.

Reading the Comics, January 29, 2015: Returned Motifs Edition

I do occasionally worry that my little blog is going to become nothing but a review of mathematics-themed comic strips, especially when Comic Strip Master Command sends out abundant crops like it has the past few weeks. This week’s offerings bring out the return of a lot of familiar motifs, like fighting with word problems and anthropomorphized numbers; and there’s one strip that suggests a pair of articles I wrote a while back might be useful yet.

Bill Amend’s FoxTrot (January 25, and not a rerun) puts out a little word problem, about what grade one needs to get a B in this class, in the sort of passive-aggressive sniping teachers long to get away with. As Paige notes, it really isn’t a geometry problem, although I wonder if there’s a sensible way to represent it as a geometry problem.

Ruben Bolling’s Super-Fun-Pax Comix superstar Chaos Butterfly appears not just in the January 25th installment but also gets a passing mention in Mark Heath’s Spot the Frog (January 29, rerun). Chaos Butterfly in all its forms seems to be popping up a lot lately; I wonder if it’s something in the air.

Reading the Comics, July 28, 2014: Homework in an Amusement Park Edition

I don’t think my standards for mathematics content in comic strips are seriously lowering, but the strips do seem to be coming pretty often for the summer break. I admit I’m including one of these strips just because it lets me talk about something I saw at an amusement park, though. I have my weaknesses.

Harley Schwadron’s 9 to 5 (July 25) builds its joke around the ambiguity of saying a salary is six (or some other number) of figures, if you don’t specify what side of the decimal they’re on. That’s an ordinary enough gag, although the size of a number can itself be an interesting thing to know. The number of digits it takes to write a number down corresponds, roughly, with the logarithm of the number, and in the olden days a lot of computations depended on logarithms: multiplying two numbers is equivalent to adding their logarithms; dividing two numbers, subtracting their logarithms. And addition and subtraction are normally easier than multiplication and division. Similarly, raising a number to a power becomes multiplying the power by the number’s logarithm, and multiplication is easier than exponentiation. So counting the number of digits in a number might be worth something anyway.
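Those logarithm identities can be demonstrated in a few lines of Python; the particular numbers are my own, chosen only for illustration:

```python
import math

# Multiplying two numbers is adding their logarithms ...
a, b = 123.0, 456.0
product = math.exp(math.log(a) + math.log(b))
print(product)   # about 56088, which is 123 * 456

# ... and raising to a power is multiplying by a logarithm.
power = math.exp(3 * math.log(a))
print(power)     # about 123 cubed

# The number of digits tracks the base-10 logarithm.
digits = math.floor(math.log10(product)) + 1
print(digits)    # 5
```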

Steve Breen and Mike Thompson’s Grand Avenue (July 25) has the kids mention something as being “like going to an amusement park to do math homework”, which gives me a chance to share this incident. Last year my love and I were in the Cedar Point amusement park (in Sandusky, Ohio), and went to the coffee shop. We saw one guy sitting at a counter, with his laptop and a bunch of papers sprawled out, looking pretty much like we do when we’re grading papers, and we thought initially that it was so very sad that someone would be so busy at work that (we presumed) he couldn’t even really participate in the family expedition to the amusement park.

And then we remembered: not everybody lives a couple hours away from an amusement park. If we lived, say, fifteen minutes from a park we had season passes to, we’d certainly at least sometimes take our grading work to the park, so we could get it done in an environment we liked and reward ourselves for getting done with a couple roller coasters and maybe the Cedar Downs carousel (which is worth an entry around these parts anyway). To grade, anyway; I’d never have the courage to bring my laptop to the coffee shop. So I guess all I’m saying is, I have a context in which yes, I could imagine going to an amusement park to grade math homework at least.

Wulff and Morgenthaler Truth Facts (July 25) makes a Venn diagram joke in service of asserting that only people who don’t understand statistics would play the lottery. This is an understandable attitude of Wulff and Morgenthaler, and of many, many people who make the same claim. The expectation value — each amount you might win, times the probability of winning it, summed up, minus the cost of the ticket — is negative for all but the most extremely oversized lottery payouts, and the most extremely oversized lottery payouts still give you odds of winning so tiny that you really aren’t hurting your chances by not buying a ticket. However, the smugness behind the attitude bothers me — I’m generally bothered by smugness — and jokes like this one contain the assumption that the only sensible way to live is a ruthless profit-and-loss calculation to life that even Jeremy Bentham might say is a bit much. For the typical person, buying a lottery ticket is a bit of a lark, a couple dollars of disposable income spent because, what the heck, it’s about what you’d spend on one and a third sodas and you aren’t that thirsty. Lottery pools with coworkers or friends make it a small but fun social activity, too. That something is a net loss of money does not mean it is necessarily foolish. (This isn’t to say it’s wise, either, but I’d generally like a little more sympathy for people’s minor bits of recreational foolishness.)

Marc Anderson’s Andertoons (July 27) does a spot of wordplay about the meaning of “aftermath”. I can’t think of much to say about this, so let me just mention that Florian Cajori’s A History of Mathematical Notations reports (section 201) that the + symbol for addition appears to trace from writing “et”, meaning and, a good deal, with the letters merging together and simplifying from that. This seems plausible enough on its face, but it does cause me to reflect that the & symbol also is credited as a symbol born from writing “et” a lot. (Here, picture writing Et and letting the middle and lower horizontal strokes of the E merge with the cross bar and the lowest point of the t.)

Berkeley Breathed’s Bloom County (July 27, rerun from, I believe, July of 1988) is one of the earliest appearances I can remember of the Grand Unification appearing in popular culture, certainly in comic strips. Unifications have a long and grand history in mathematics and physics in explaining things which look very different by the same principles, with the first to really draw attention probably being Descartes showing that algebra and geometry could be understood as a single thing, and problems difficult in one field could be easy in the other. In physics, the most thrilling unification was probably the explaining of electricity, magnetism, and light as the same thing in the 19th century; being able to explain many varied phenomena with some simple principles is just so compelling. General relativity shows that we can interpret accelerations and gravitation as the same thing; and in the late 20th century, physicists found that it’s possible to use a single framework to explain both electromagnetism and the forces that hold subatomic particles together and that break them apart.

It’s not yet known how to explain gravity and quantum mechanics in the same coherent frame. It’s generally assumed they can be reconciled, although I suppose there’s no logical reason they have to be. Finding a unification — or a proof they can’t be unified — would certainly be one of the great moments of mathematical physics.

The idea of the grand unification theory as an explanation for everything is … well, fair enough. A grand unification theory should be able to explain what particles in the universe exist, and what forces they use to interact, and from there it would seem like the rest of reality is details. Perhaps so, but it’s a long way to go from a simple starting point to explaining something as complicated as a penguin. I guess what I’m saying is I doubt Oliver would notice the non-existence of Opus in the first couple pages of his work.

Thom Bluemel’s Birdbrains (July 28) takes us back to the origin of numbers. It also makes me realize I don’t know what’s the first number that we know of people discovering. What I mean is, it seems likely that humans are just able to recognize a handful of numbers, like one and two and maybe up to six or so, based on how babies and animals can recognize something funny if the counts of small numbers of things don’t make sense. And larger numbers were certainly known to antiquity; probably the fact that numbers keep going on forever was known to antiquity. And some special numbers with interesting or difficult properties, like pi or the square root of two, were known so long ago we can’t say who discovered them. But then there are numbers like the Euler-Mascheroni constant, which are known and recognized as important things, and we can say reasonably well who discovered them. So what is the first number with a known discoverer?

Reading the Comics, July 24, 2014: Math Is Just Hard Stuff, Right? Edition

Maybe there is no pattern to how Comic Strip Master Command directs the making of mathematics-themed comic strips. It hasn’t quite been a week since I had enough to gather up again. But it’s clearly the summertime anyway; the most common theme this time seems to be just that mathematics is some hard stuff, without digging much into particular subjects. I can work with that.

Pab Sungenis’s The New Adventures of Queen Victoria (July 19) brings in Erwin Schrödinger and his in-strip cat Barfly for a knock-knock joke about proof, with Andrew Wiles’s name dropped probably because he’s the only person who’s gotten to be famous for a mathematical proof. Wiles certainly deserves fame for proving Fermat’s Last Theorem and opening up what I understand to be a useful new field for mathematical research (Fermat’s Last Theorem by itself is nice but unimportant; the tools developed to prove it, though, that’s worthwhile), but remembering only Wiles does slight Richard Taylor, whose help Wiles needed to close a flaw in his proof.

Incidentally I don’t know why the cat is named Barfly. It has the feel to me of a name that was a punchline for one strip and then Sungenis felt stuck with it. As Thomas Dye of the web comic Newshounds said, “Joke names’ll kill you”. (I’m inclined to think that funny names can work, as the Marx Brothers, Fred Allen, and Vic and Sade did well with them, but they have to be a less demanding kind of funny.)

John Deering’s Strange Brew (July 19) uses a panel full of mathematical symbols scrawled out as the representation of “this is something really hard being worked out”. I suppose this one could also be filed under “rocket science themed comics”, but it comes from almost the first problem of mathematical physics: if you shoot something straight up, how long will it take to fall back down? The faster the thing starts up, the longer it takes to fall back, until at some speed — the escape velocity — it never comes back. This is because the size of the gravitational attraction between two things decreases as they get farther apart. At or above the escape velocity, the thing has enough speed that all the pulling of gravity, from the planet or moon or whatever you’re escaping from, will not suffice to slow the thing down to a stop and make it fall back down.

The escape velocity depends on the size of the planet or moon or sun or galaxy or whatever you’re escaping from, of course, and how close to the surface (or center) you start from. It also assumes you’re talking about the speed when the thing starts flying away, that is, that the thing doesn’t fire rockets or get a speed boost by flying past another planet or anything like that. And things don’t have to reach the escape velocity to be useful. Nothing that’s in earth orbit has reached the earth’s escape velocity, for example. I suppose that last case is akin to how you can still get some stuff done without getting out of the recliner.
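The formula behind all this is simple enough to sketch. Escape velocity comes out to the square root of twice the gravitational constant times the mass being escaped, divided by the starting distance from its center — so, for the Earth’s surface, using the standard values:

```python
import math

# Escape velocity: v = sqrt(2 * G * M / r), the speed at which an
# unpowered object coasting straight up never quite falls back.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # mass of the Earth, kg
R_EARTH = 6.371e6    # mean radius of the Earth, m

def escape_velocity(mass_kg, radius_m):
    """Speed needed at distance radius_m from a body of mass_kg."""
    return math.sqrt(2 * G * mass_kg / radius_m)

v = escape_velocity(M_EARTH, R_EARTH)
print(f"{v / 1000:.1f} km/s")  # about 11.2 km/s
```

Notice that, true to the discussion above, the answer depends only on the mass and on how far from the center you start, not on the escaping object’s own mass.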

Mel Henze’s Gentle Creatures (July 21) uses mathematics as the standard for proving intelligence exists. I’ve got a vested interest in supporting that proposition, but I can’t bring myself to say more than that it shows a particular kind of intelligence exists. I appreciate the equation of the final panel, though, as it can be pretty well generalized.

Bill Holbrook’s Safe Havens (July 22) plays on mathematics’ reputation of being not very much a crowd-pleasing activity. That’s all right, although I think Holbrook makes a mistake by having the arena claim to offer a “lecture on the actual odds of beating the casino”, since the mathematics of gambling is just the sort of mathematics I think would draw a crowd. Probability enjoys a particular sweet spot for popular treatment: many problems don’t require great amounts of background to understand, and have results that are surprising, but which have reasons that are easy to follow and don’t require sophisticated arguments, and are about problems that are easy to imagine or easy to find interesting: cards being drawn, dice being rolled, coincidences being found, or secrets being revealed. I understand Holbrook’s editorial cartoon-type point behind the lecture notice he put up, but the venue would have better scared off audiences if it offered a lecture on, say, “Chromatic polynomials for rigidly achiral graphs: new work on Yamada’s invariant”. I’m not sure I could even explain that title in 1200 words.

Missy Meyer’s Holiday Doodles (July 22) reveals to me that apparently the 22nd of July was “Casual Pi Day”. Yeah, I suppose that passes. I didn’t see much about it in my Twitter feed, but maybe I need some more acquaintances who don’t write dates American-fashion.

Thom Bluemel’s Birdbrains (July 24) again uses mathematics — particularly, Calculus — as not just the marker for intelligence but also as The Thing which will decide whether a kid goes on to success in life. I think the dolphin (I guess it’s a dolphin?) parent is being particularly horrible here, as it’s not as if a “B+” is in any way a grade to be ashamed of, and telling kids it is either drives them to give up on caring about grades, or makes them send whiny e-mails to their instructors about how they need this grade and don’t understand why they can’t just do some make-up work for it. Anyway, it makes the kid miserable, it makes the kid’s teachers or professors miserable, and for crying out loud, it’s a B+.

(I’m also not sure whether a dolphin would consider a career at Sea World success in life, but that’s a separate and very sad issue.)

Reading the Comics, March 26, 2014: Kitchen Science Department

It turns out that three of the comic strips to be included in this roundup of mathematics-themed strips mentioned things that could reasonably be found in kitchens, so that’s why I’ve added that as a subtitle. I can’t figure a way to contort the other entries to being things that might be in kitchens, but, given that I don’t get to decide what cartoonists write about I think I’m doing well to find any running themes.

Ralph Hagen’s The Barn (March 19) is built around a possibly accurate bit of trivia which tries to stagger the mind by considering the numinous: how many stars are there? This evokes, to me at least, one of the famous bits of ancient Greek calculations (for which they get much less attention than the geometers and logicians did), as Archimedes made an effort to estimate how many grains of sand could fit inside the universe. Archimedes had apparently little fear of enormous numbers, and had to strain the Greek system for representing numbers to get at such enormous quantities. But he was an ingenious reasoner: he was able to estimate, for example, the sizes and distances to the Moon and the Sun based on observing, with the naked eye, the half-moon; and his work on problems like finding the value of pi gets surprisingly close to integral calculus and would probably be a better introduction to the subject than pre-calculus courses are. It’s quite easy in considering how big (and how old) the universe is to get to numbers that are really difficult to envision, so, trying to reduce that by imagining stars as grains of salt might help, if you can imagine a ball of salt eight miles across.
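We can check the strip’s trivia with a back-of-envelope calculation in the Archimedean spirit. The grain size is my assumption — I take a grain of salt to be a cube about 0.3 millimeters on a side — so this is an order-of-magnitude estimate, nothing more:

```python
import math

# Rough count of salt grains in a ball eight miles across.
# Assumption: a salt grain is a cube roughly 0.3 mm on a side.
MILE_M = 1609.344
ball_radius_m = 4 * MILE_M                      # eight miles across
ball_volume = (4 / 3) * math.pi * ball_radius_m ** 3
grain_volume = (0.3e-3) ** 3                    # assumed grain, m^3

grains = ball_volume / grain_volume
print(f"{grains:.1e}")  # on the order of 10^22 grains
```

That lands around ten to the twenty-second power, which is in the range of common estimates for the number of stars in the observable universe — so the comparison holds up, at least to within the generous tolerances of this sort of estimate.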