# The Most Unlikely NHL Playoff Upsets of the Last Five Years

Nick Emptage, writing for puckprediction.com, has the sort of post I can’t resist: it’s built on the application of statistics to sports. In this case it’s the National Hockey League playoffs, and the post builds on an earlier one about the conditional probabilities of the home-team-advantaged side winning a best-of-seven series, to look at the most unlikely playoff wins of the last several years. Since I’m from New Jersey I feel a little irrational pride at the New Jersey Devils claiming two of the most improbable wins, not least because I remember the Devils in the 1980s, when they could lose as many as 200 games per eighty-game season, so seeing them in the playoffs at all is a wondrous thing.

Originally posted on Puck Prediction:

Now that the regular season is winding down and the NHL playoffs are almost upon us, many fans are looking back at the thrilling victories and crushing losses of recent postseasons. Inevitably, these moments are the ones that define sports for fans – the wins that still bring a smile to your face years after the fact, the losses that take you days (if not longer) to get over. But part of what makes them so exciting is the element of randomness that determines who raises the big trophy at season’s end. In the regular season, the hockey gods have time to giveth and taketh away pretty equally across teams, and success is largely determined by the quality of the roster and the coaching. Yet the better team doesn’t always come out ahead in a seven-game series. A handful of unlucky bounces, a cold streak from a key scorer, or a…


# Stable Marriages and Designing Markets

A few days ago Jeremy Kun, of the Math ∩ Programming blog, wrote about the problem of stable marriages, by which is meant pairing off people so that everyone is happy with their partner. Put like that it almost sounds like the sort of thing people used to complain about in letters to Ann Landers when mathematicians did foolish things: don’t mathematicians know that feelings matter here, and how does any of this help them teach kids arithmetic?

But the problem is just put that way because it’s one convenient representation of a difficult problem. Given a number of agents that can be paired up, and some way of measuring the collection of pairings, how can you select the best pairing? And what do you mean by best? Do you mean the one that maximizes whatever it is you’re measuring? The one that minimizes it (if you’re measuring, say, unhappiness, or cost, or something else you’d want as little of)? Jeremy Kun describes the search for a pairing that’s stable, which requires, in part, coming up with a definition of just what “stable” means.

The work can be put to use describing any two-party matching: marriages, or people choosing where to work and employers choosing whom to hire, or people deciding what to buy or where to live, all sorts of situations where people have preferences and good fits matter. Once the model is developed it has more applications than what it was originally meant for, which is part of what makes this a good question. Kun also writes a bit about how to expand the problem so as to handle some more complicated cases, and shows how the problem can be put onto a computer.

Originally posted on Math ∩ Programming:

Here is a fun puzzle. Suppose we have a group of 10 men and 10 women, and each of the men has sorted the women in order of their preference for marriage (that is, a man prefers to marry a woman earlier in his list over a woman later in the list). Likewise, each of the women has sorted the men in order of marriageability. We might ask if there is any way that we, the omniscient cupids of love, can decide who should marry to make everyone happy.

Of course, the word happy is entirely imprecise. The mathematician balks at the prospect of leaving such terms undefined! In this case, it’s quite obvious that not everyone will get their first pick. Indeed, if even two women prefer the same man someone will have to settle for less than their top choice. So if we define happiness in this naive way…


# Weightlessness at the Equator (Whiteboard Sketch #1)

The mathematics blog Scientific Finger Food has an interesting entry, “Weightlessness at the Equator (Whiteboard Sketch #1)”, which looks at the sort of question that’s easy to imagine when you’re young: gravity pulls you toward the center of the earth, and the earth’s spinning pushes you away from it (unless we’re speaking precisely, but you know what I mean), so how fast would the planet have to spin for a person on the equator to feel no weight at all?

It’s a straightforward problem, one a high school student ought to be able to do. Sebastian Templ works the problem out, including the all-important diagram that shows the key part: which calculation to do.

In reality, the answer doesn’t much matter, since a planet spinning nearly fast enough to allow for weightlessness at the equator would be spinning so fast it couldn’t hold itself together, and a more advanced version of this problem could make use of that: given some measure of how strongly rock holds itself together, what’s the fastest the planet can spin before it falls apart? And a yet more advanced course might work out how other phenomena, such as tides or the precession of the poles, might work. Eventually, one might go on to compose highly-regarded works of hard science fiction, if you’re willing to start from the questions easy to imagine when you’re young.

Originally posted on scientific finger food:

At the present time, our Earth does a full rotation every 24 hours, which results in day and night. Just like on a carrousel, its inhabitants (and, by the way, all the other stuff on and of the planet) are pushed “outwards” due to the centrifugal force. So we permanently feel an “upwards” pulling force thanks to the Earth’s rotation. However, the centrifugal force is much weaker than the centripetal force, which is directed towards the core of the planet and usually called “gravitation”. If this wasn’t the case, we would have serious problems holding our bodies down to the ground. (The ground, too, would have troubles holding itself “to the ground”.)

Especially on the equator, the centrifugal and the gravitational force are antagonistic forces: the one points “downwards” while the other points “upwards”.

# How fast would the Earth have to spin in order to cause weightlessness at the…

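The calculation the post builds toward is short enough to sketch. This is my own version, using standard textbook values for the Earth’s radius and surface gravity, so treat the exact numbers as assumptions:

```python
import math

# Weightlessness at the equator: the centrifugal acceleration
# omega^2 * R must equal the gravitational acceleration g.
g = 9.81        # m/s^2, surface gravity (standard value)
R = 6.378e6     # m, equatorial radius of the Earth

omega = math.sqrt(g / R)         # required angular speed, rad/s
period = 2 * math.pi / omega     # length of one "day" at that speed, in seconds

print(period / 60)   # about 84 minutes, instead of today's 1440
```

That 84-minute figure is, not coincidentally, about the period of a satellite in the lowest possible orbit: at that spin rate the equator is effectively in orbit.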

# Thomas Hobbes and Doing of Important Mathematics

One of my mathematics-trivia-of-the-day Twitter feeds mentioned that Saturday was the birthday of Thomas Hobbes (5 April 1588 to 4 December 1679), and yes, that Hobbes. I was surprised; I knew Hobbes had written Leviathan and was famous for philosophical works that I hadn’t read either. I had no idea that he’d done anything important mathematically, but then, the generic biography for a mathematician of the 16th or 17th century is “philosopher/theologian who advanced mathematics in order to further his astronomical research”, so, it wouldn’t be strange.

The MacTutor History of Mathematics archive’s biography explains that he actually came to discover mathematics relatively late in life. They quote John Aubrey’s A Brief Life Of Thomas Hobbes for a scene that’s just beautiful:

He was forty years old before he looked on geometry; which happened accidentally. Being in a gentleman’s library Euclid’s Elements lay open, and ’twas the forty-seventh proposition in the first book. He read the proposition. “By God,” said he, “this is impossible!” So he reads the demonstration of it, which referred him back to such a proof; which referred him back to another, which he also read. … at last he was demonstratively convinced of that truth. This made him in love with geometry.

And so began a love of mathematics that, if MacTutor is right, lasted the half-century he had left to live. His mathematical work would never displace his standing as a philosopher, but then, his accomplishments …

Well, that’s the less cheerful part of it. For example, says MacTutor, shortly before his 1655 publication of De Corpore (On The Body) Hobbes worked out a method of squaring the circle, using straightedge and compass alone. It’s impossible to do this (though that it is impossible would take two more centuries to prove), and Hobbes realized his demonstration was wrong shortly before publication. Rather than remove the proof he added text to explain that it was a false proof.

False proofs can be solid teaching tools: just working out where a proof goes wrong is a good exercise in testing one’s knowledge of concepts and how they relate, and whether a concept is actually well-defined yet. And it’s not like attempting to square the circle is by itself ridiculous. I suspect most mathematicians even today give it a try, at least before they study the proofs that it’s impossible and move on to trying Fermat’s Last Theorem.

But Hobbes also included a second attempted proof which he again realized was at best “an approximate quadrature”, and tried a third which he realized was wrong while the book was being printed, so he added a note that it was meant as a problem for the reader. Hobbes was sure he was close, and would keep on trying to prove he had squared the circle to the end of his life. These circle-squarings set off a long-running feud with John Wallis, a pioneer in algebra and calculus (and the person who introduced the ∞ symbol to mathematics), who attacked Hobbes’s mistakes and faulty claims.

Hobbes also refused to have anything to do with the algebra and calculus and the symbolic operations which were revolutionizing mathematics at the time; he wanted geometry and nothing but. MacTutor quotes him as insulting — and here we’re reminded that the 17th century was a golden age of academic insulting — Wallis’s Algebra as “a scab of symbols [which disfigured the page] as if a hen had been scraping there”.

The best that MacTutor can say about Hobbes’s mathematics is that while he claimed to do a lot of truly impressive work, none of the things which would have been substantial advances in mathematics were correct. And there is something sad that a person of great intellectual power could be so in love with mathematics and find that love wasn’t reciprocated. He wrote near the end of his life a list of seven problems “sought in vain by the diligent scrutiny of the greatest geometers since the very beginnings of geometry” that he concluded he’d solved; and, to be kind, he’s not renowned as the person who found the center of gravity of the quadrant of a circle.

But that sadness is taking an unfair view of the value of doing mathematics. So Hobbes spent a half-century playing with plane figures without finding something true that future generations would regard as novel — how is that a failing? How many professional mathematicians will do something that’s of any general interest, and won’t even write a classic on social contract theory that people will think they probably should’ve read at some point? He found in geometry something which brought him a sense of wonder, and which was delightful enough to keep him going through long and bitter academic feuds (I grant it’s possible Hobbes enjoyed the feuds; some people do), and without apparently losing his enthusiasm. That’s wonderful, regardless of whether his work found anything original.

# Can You Be As Clever As Dirac For A Little Bit

I’ve been reading Graham Farmelo’s The Strangest Man: The Hidden Life of Paul Dirac, which is a quite good biography of a really interestingly odd man and important physicist. Among the things mentioned is that at one point Dirac was invited into one of those number-challenge puzzles that even today sometimes make the rounds of the Internet. This one is to construct whole numbers using exactly four 2’s and the normal, non-exotic operations — addition, subtraction, multiplication, division, exponentials, roots, the sort of thing you can learn without having to study calculus. For example:

$1 = \left(2 \div 2\right) \cdot \left(2 \div 2\right)$
$2 = 2 \cdot 2^{\left(2 - 2\right)}$
$3 = 2 + \left(\frac{2}{2}\right)^2$
$4 = 2 + 2 + 2 - 2$

Now these aren’t unique; for example, you could also form 2 by writing $2 \div 2 + 2 \div 2$, or form 4 as $2^{\left(2 + 2\right)\div 2}$. But the game is to form as many whole numbers as you can, and to find the highest number you can.

Dirac went to work and, his friends complained, broke the game, because he found a formula that can produce any positive whole number using exactly four 2’s.

I couldn’t think of it, and had to look to the endnotes to find what it was, but you might be smarter than me, and might have fun playing around with it before giving up and looking in the endnotes yourself. The important things are: it has to produce any positive integer, it has to use exactly four 2’s (although they may be used stupidly, as in the examples I gave above), and it has to use only common arithmetic operators (an ambiguous term, I admit, but if you can find it on a non-scientific calculator or in a high school algebra textbook outside the chapter warming you up to calculus, you’re probably fine). Good luck.
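In the meantime, the game itself is easy to hand to a computer. Here’s a brute-force sketch of my own — it searches the basic operations only, and it is not Dirac’s trick, which I won’t spoil:

```python
from itertools import product
from fractions import Fraction

# Combine exactly four 2's with +, -, *, / and exponentiation, and see
# which whole numbers fall out.  Exact rational arithmetic (Fraction)
# avoids floating-point surprises.
def combine(a, b):
    out = {a + b, a - b, a * b}
    if b != 0:
        out.add(a / b)
    # allow exponentiation with small whole exponents only, to keep the
    # search finite and the numbers manageable
    if b.denominator == 1 and abs(b) <= 8 and not (a == 0 and b < 0):
        out.add(a ** int(b))
    return out

def values(n):
    """Every value reachable using exactly n twos."""
    if n == 1:
        return {Fraction(2)}
    out = set()
    for k in range(1, n):
        for a, b in product(values(k), values(n - k)):
            out |= combine(a, b)
    return out

whole = sorted(v for v in values(4) if v.denominator == 1 and 0 <= v <= 20)
print(whole)
```

The exponent cutoff is just to keep the search finite; loosening it, or adding square roots, grows the list of reachable numbers.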

# Reading the Comics, April 1, 2014: Name-Dropping Monkeys Edition

There’s been a little rash of comics that bring up mathematical themes, which is ordinarily pretty good news. But when I went back over my notes I realized most of them are pretty much name-drops, mentioning stuff that’s mathematical without giving me much to expand upon. The exceptions concern what might well be the greatest gift early 20th century probability theory ever gave humor writers. That’s enough for me.

Mark Anderson’s Andertoons (March 27) plays on the double meaning of “fifth” as representing a term in a sequence and as representing a reciprocal fraction. It also makes me realize that I hadn’t paid attention to the fact that English (at least) lets you get away with using the ordinal number for the unit fraction, at least apart from “first” and “second”. I can make some guesses about why English allows that, but would like to avoid unnecessarily creating folk etymologies.

Hector D Cantu and Carlos Castellanos’s Baldo (March 27) has Baldo not do as well as he expected in predictive analytics, which I suppose doesn’t explicitly require mathematics, but would be rather hard to do without. Making predictions is one of mathematics’s great applications, and drives much mathematical work, in the extrapolation of curves and the solving of differential equations most obviously.

Dave Whamond’s Reality Check (March 27) name-drops the New Math, in the service of the increasingly popular sayings that suggest Baby Boomers aren’t quite as old as they actually are.

Rick Stromoski’s Soup To Nutz (March 29) name-drops the metric system, as Royboy notices his ten fingers and ten toes and concludes that he is indeed metric. The metric system is built around base ten, of course, and the idea that changing units should be as easy as multiplying and dividing by powers of ten, and powers of ten are easy to multiply and divide by because we use base ten for ordinary calculations. And why do we use base ten? Almost certainly because most people have ten fingers and ten toes, and it’s so easy to make the connection between counting fingers, counting objects, and then to the abstract idea of counting. There are cultures that used other numerical bases; for example, the Maya used base 20, but it’s hard not to notice that that’s just using fingers and toes together.

Greg Cravens’s The Buckets (March 30) brings out a perennial mathematics topic, the infinite monkeys. Here Toby figures he could be the greatest playwright by simply getting infinite monkeys and typewriters to match, letting them work, and harvesting the best results. He hopes that he doesn’t have to buy many of them, and, to spoil the joke, he doesn’t need to: the remarkable thing about the infinite monkeys problem is that you don’t actually need that many monkeys. You’ll get the same result — that, eventually, all the works of Shakespeare will be typed — with one monkey or with a million or with infinitely many monkeys; with fewer monkeys you just have to wait longer to expect success. Tim Rickard’s Brewster Rockit (April 1) manages with a mere hundred monkeys, although he doesn’t reach Shakespearean levels.

But making do with fewer monkeys is a surprisingly common tradeoff in random processes. You can often get the same results with many agents running for a shorter while, or a few agents running for a longer while. Processes that allow you to do this are called “ergodic”, and being able to prove that a process is ergodic is good news because it means a complicated system can be represented with a simple one. Unfortunately it’s often difficult to prove that something is ergodic, so you might instead just warn that you are assuming the ergodic hypothesis or ergodicity, and if nothing else you can probably get a good fight going about the validity of “ergodicity” next time you play Scrabble or Boggle.
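That many-agents-versus-long-run tradeoff is easy to simulate. Here’s a little sketch of my own, not from any of the strips, with a three-letter target so the waits stay reasonable:

```python
import random
import string

# Whether one monkey makes half a million attempts, or half a million
# monkeys make one attempt each, the expected number of successes is
# identical -- the tradeoff described above.
random.seed(42)
TARGET = "cat"
ALPHABET = string.ascii_lowercase
P_HIT = 1 / len(ALPHABET) ** len(TARGET)   # chance one attempt matches

def attempt():
    """One monkey mashes len(TARGET) random keys."""
    return "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))

TRIALS = 500_000
one_monkey = sum(attempt() == TARGET for _ in range(TRIALS))      # one monkey, many tries
many_monkeys = sum(attempt() == TARGET for _ in range(TRIALS))    # many monkeys, one try each

print(round(P_HIT * TRIALS, 1))    # expected hits either way: ~28.4
print(one_monkey, many_monkeys)    # both counts should land near that value
```

(Of course with truly independent attempts the two loops are statistically identical by construction; the interesting, and harder, ergodic case is when each monkey’s attempts are correlated over time.)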

# The Math Blog Statistics, March 2014

It’s the start of a fresh month, so let me carry on my blog statistics reporting. In February 2014, apparently, there were a mere 423 pages viewed around here, with 209 unique visitors. That’s increased a bit, to 453 views from 257 visitors, my second-highest number of views since last June and second-highest number of visitors since last April. I can make that depressing, though: it means views per visitor dropped from 2.02 to 1.76, but then, they were at 1.76 in January anyway. And I reached my 14,000th page view, which is fun, but I’d need an extraordinary bit of luck to get to 15,000 this month.

March’s most popular articles were a mix of the evergreens — trapezoids and comics — with a bit of talk about March Madness serving as obviously successful clickbait:

1. How Many Trapezoids I Can Draw, and again, nobody’s found one I overlooked.
2. Calculating March Madness, and the tricky problem of figuring out the chance of getting a perfect bracket.
3. Reading The Comics, March 1, 2014: Isn’t It One-Half X Squared Plus C? Edition, showing how well an alleged joke will make comic strips popular.
4. Reading The Comics, March 26, 2014: Kitchen Science Department, showing that maybe it’s just naming the comics installments that matters.
5. What Are The Chances Of An Upset, which introduces some of the interesting quirks of the bracket and seed system of playoffs, such as the apparent advantage an eleventh seed has over an eighth seed.

There’s a familiar set of countries sending me the most readers: as ever the United States up top (277), with Denmark in second (26) and Canada in third (17). That’s almost a tie, though, as the United Kingdom (16), Austria (15), and the Philippines (13) could have taken third easily. I don’t want to explicitly encourage international rivalries to drive up my page count here, I’m just pointing it out. Singapore is in range too. The single-visitor countries this past month were the Bahamas, Belgium, Brazil, Colombia, Hungary, Mexico, Peru, Rwanda, Saudi Arabia, Spain, Sri Lanka, Sweden, Syria, and Taiwan. Hungary, Peru, and Saudi Arabia are the only repeat visitors from February, and nobody’s got a three-month streak going.

There wasn’t any good search-term poetry this month; mostly the searches were questions about trapezoids, though a couple of them were more interesting.

So, that’s where things stand: I need to get back to writing about trapezoids and comic strips.

# Realistic Modeling

“Economic Realism (Wonkish)”, a blog entry by Paul Krugman in The New York Times, discusses a paper, “Chameleons: The Misuse Of Mathematical Models In Finance And Economics”, by Paul Pfleiderer of Stanford University, which surprises me by including a color picture of a chameleon right there on the front page, just for its visual appeal, and in an academic paper at that; I didn’t know you could do that in academia these days. Anyway, Pfleiderer discusses the difficulty of what he terms filtering: making sure that the assumptions one makes to build a model, which are simplifications and abstractions of the real-world thing in which you’re interested, aren’t too far out of line with the way the real thing behaves.

This challenge, which I think of as verification or validation, is important when you deal with pure mathematical or physical models too. Some of it comes at the theoretical stage: is it realistic to model a fluid as if it had no viscosity? Unless you’re dealing with superfluid helium or something exotic like that, no, but you can do very good work with a model that isn’t too far off. Or there’s a classic model of the way magnetism forms, known as the Ising model, which in a very special case — a one-dimensional line — is simple enough that a high school student could solve it. (Well, a very smart high school student, one who’s run across an exotic function called the hyperbolic cosine, could do it.) But that model is so simple that it can’t show the phase change: warm a magnet past a critical temperature and it stops being magnetic. Is the model no good? If you aren’t interested in the phase change, it might be fine.
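For the curious, here is where the hyperbolic cosine comes in; this is standard textbook material I’m adding for illustration, not anything from Krugman or Pfleiderer. For $N$ spins $s_i = \pm 1$ in a line, with coupling $J$, inverse temperature $\beta$, and no external field, the partition function works out in one line:

$Z = \sum_{s_1 = \pm 1} \cdots \sum_{s_N = \pm 1} \prod_{i=1}^{N-1} e^{\beta J s_i s_{i+1}} = 2 \left( 2 \cosh \beta J \right)^{N-1}$

because summing over each spin in turn contributes a factor $e^{\beta J} + e^{-\beta J} = 2 \cosh \beta J$ no matter what its neighbor is doing. The answer is a smooth function of temperature for every $N$, which is precisely why this simplest version can’t show the phase change.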

And then there is the numerical stage: if you’ve set up a computer program that is supposed to represent fluid flow, does it correctly find solutions? I’ve heard it claimed that the majority of time spent on a numerical project is spent in validating the results, and that isn’t even simply in finding and fixing bugs in the code. Even once the code is doing perfectly what we mean it to do, it must be checked that what we mean it to do is relevant to what we want to know.

Pfleiderer’s is an interesting paper and, I think, worth the read; despite its financial-mathematics focus (and a brief chat about quantum mechanics) it doesn’t require any particularly specialized training. There are some discussions of particular financial models, but what’s important are the assumptions being made behind those models, and those are intelligible without prior training in the field.

# Reading the Comics, March 26, 2014: Kitchen Science Department

It turns out that three of the comic strips to be included in this roundup of mathematics-themed strips mentioned things that could reasonably be found in kitchens, so that’s why I’ve added that as a subtitle. I can’t figure a way to contort the other entries to being things that might be in kitchens, but, given that I don’t get to decide what cartoonists write about I think I’m doing well to find any running themes.

Ralph Hagen’s The Barn (March 19) is built around a possibly accurate bit of trivia which tries to stagger the mind by considering the numinous: how many stars are there? This evokes, for me at least, one of the famous bits of ancient Greek calculation (work that gets much less attention than the geometry and logic do), as Archimedes made an effort to estimate how many grains of sand could fit inside the universe. Archimedes had apparently little fear of enormous numbers, and had to strain the Greek system for representing numbers to get at such enormous quantities. But he was an ingenious reasoner: he was able to estimate, for example, the sizes of and distances to the Moon and the Sun based on observing, with the naked eye, the half-moon; and his work on problems like finding the value of pi gets surprisingly close to integral calculus, and would probably be a better introduction to the subject than pre-calculus courses are. It’s quite easy in considering how big (and how old) the universe is to get to numbers that are really difficult to envision, so trying to reduce that by imagining stars as grains of salt might help, if you can imagine a ball of salt eight miles across.

# What Are The Chances Of An Upset?

I’d wondered idly the other day if a number-16 seed had ever beaten a number-one seed in the NCAA Men’s Basketball tournament. This finally made me go and actually try looking it up; a page on statistics.about.com has what it claims are the first-round results from 1985 (when the current 64-team format was adopted) to 2012. This lets us work out roughly the probability of, for example, the number-three seed beating the number-14, at least by what’s termed the “frequentist” interpretation of probability. In that interpretation, the probability of something happening is roughly the number of times the thing you’re interested in happened, divided by the number of times it could have happened. From 1985 to 2012 each of the various first-round possibilities was played 112 times (28 tournaments with four divisions each); if we make some plausible assumptions about games being independent events (how one seed did last year doesn’t affect how it does this year), we should have a decent rough idea of the probability of each seed winning.
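As a sketch of that frequentist recipe, with a made-up win count standing in for the actual table (I’m not copying the real numbers here):

```python
# Frequentist estimate: (times it happened) / (times it could have).
def frequentist_probability(wins, games):
    """Estimate P(win) from observed counts."""
    return wins / games

games_played = 112          # 28 tournaments x 4 divisions, 1985-2012
hypothetical_wins = 94      # placeholder count for, say, the 3 seed

print(frequentist_probability(hypothetical_wins, games_played))   # ~0.839
```

The independence assumption matters here: it’s what lets us treat the 112 games as repeated trials of the same experiment at all.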

According to those statistics, the number-one seed has apparently never been beaten by the number-16, which is remarkable to me. I’m surprised; I’d have guessed the bottom team had at least a one percent chance of victory. I’m also surprised that the Internet seems to have only the one page that’s gathered explicitly how often the first rounds go to the various seeds, although perhaps I’m just not searching for the right terms.

From http://bracketodds.cs.illinois.edu I learn that Dr Sheldon Jacobson and Dr Douglas M King of the University of Illinois (Urbana) published an interesting paper, “Seeding In The NCAA Men’s Basketball Tournament: When Is A Higher Seed Better?”, which runs a variety of statistical tests on the outcomes of March Madness tournaments. It finds that the seeding does seem to correspond to the stronger team in the first few rounds, but that after the Elite Eight round there’s no evidence that a higher seed is more likely to win than the lower; effectively, after the first few rounds you might as well make a random pick.

Jacobson and King, along with Dr Alexander Nikolaev at SUNY/Buffalo and Dr Adrian J Lee, Central Illinois Technology and Education Research Institute, also wrote “Seed Distributions for the NCAA Men’s Basketball Tournament” which tries to model the tournament’s outcomes as random variables, and compares how these random-variable projections compare to what actually happened between 1985 and 2010. This includes some interesting projections about how often we might expect the various seeds to appear in the Sweet Sixteen, Elite Eight, or Final Four. It brings out some surprises — which make sense when you look back at the brackets — such as that the number-eight or number-nine seed has a worse chance of getting to the Sweet Sixteen than the eleventh- or twelfth-seed does.

(The eighth or ninth seed, if it wins, has to play whoever wins the sixteen-versus-one contest, which will be the number-one seed. The eleventh seed has to beat first the number-six seed, and then either the number-three or the number-14 seed, either of which is a more winnable game than facing the top seed.)
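The arithmetic of that comparison can be sketched with placeholder win probabilities; these numbers are my own illustration, not the paper’s estimates:

```python
# Path to the Sweet Sixteen for the 8 seed versus the 11 seed,
# multiplying per-game win probabilities (made-up placeholders).
p_8_beats_9 = 0.50      # the 8/9 game is close to a coin flip
p_8_beats_1 = 0.20      # the winner then faces the top seed
path_8 = p_8_beats_9 * p_8_beats_1

p_11_beats_6 = 0.35           # a real but not hopeless underdog game
p_11_beats_3_or_14 = 0.40     # blended chance against the 3/14 winner
path_11 = p_11_beats_6 * p_11_beats_3_or_14

print(path_8, path_11)   # roughly 0.10 versus 0.14: the 11 seed's road is easier
```

The point survives reasonable changes to the placeholders: as long as the second-round game against the number-one seed is lopsided enough, the eleventh seed’s two merely-difficult games multiply out to a better chance than the eighth seed’s coin flip followed by a near-hopeless one.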

Meanwhile, it turns out that in my brackets I had picked Connecticut to beat Villanova, which has me doing well in my group — we get bonus points for calling upsets — apart from the accusations of witchcraft.