## My 2018 Mathematics A To Z: Sorites Paradox

Today’s topic is the lone (so far) request by bunnydoe, so I’m under pressure to make it decent. If she or anyone else would like to nominate subjects for the letters U through Z, please drop me a note at this post. I keep fooling myself into thinking I’ll get one done in under 1200 words.

This is a story which makes a capitalist look kind of good. I say nothing about its truth, or even, at this remove, where I got it. The story as I heard it was about Ray Kroc, who made McDonald’s into a thing people of every land can complain about. The story has him demonstrate skepticism about the use of business consultants. A consultant might find, for example, that each sesame-seed hamburger bun has (say) 43 seeds. And that if they just cut it down to 41 seeds then each franchise would save (say) \$50,000 annually. And no customer would notice the difference. Fine; trim the seeds a little. The next round of consultants would point out that cutting from 41 seeds to 38 would save a further \$65,000 per store per year. And again no customer would notice the difference. Cut to 36 seeds? No customer would notice. This process would end when each bun had three sesame seeds, and the customers notice.

Part of the paradox’s intractability must be that it’s so nearly induction. Induction is a fantastic tool for mathematical problems. We couldn’t do without it. But consider the argument. If a bun is unsatisfying, one more seed won’t make it satisfying. A bun with one seed is unsatisfying. Therefore all buns have an unsatisfying number of sesame seeds on them. The conclusion is absurd, so there must be some point at which “adding one more seed won’t help” stops being true. Fine; where is that point, and why isn’t it one seed fewer or one seed more?

A certain kind of nerd has a snappy answer for the Sorites Paradox. Test a broad population on a variety of sesame-seed buns. There’ll be some so sparse that nearly everyone will say they’re unsatisfying. There’ll be some so abundant most everyone agrees they’re great. So there’s the buns most everyone says are fine. There’s the buns most everyone says are not. The dividing line is at any point between the sparsest buns that satisfy most people and the most abundant ones that don’t. The nerds then declare the problem solved and go off. Let them go. We were lucky to get as much of their time as we did. They’re quite busy solving what “really” happened in Rashomon. The approach of “set a line somewhere” is fine if all we want is guidance on where to draw a line. It doesn’t help us say why we should anoint one border over any other. At least when we use a river as a border between states we can agree that going into the water disrupts what we were doing with the land. And even then we have to ask what happens during droughts and floods, and if the river is an estuary, how tides affect matters.

We might see an answer by thinking more seriously about these sesame-seed buns. We force a problem by declaring that every bun is either satisfying or it is not. We can imagine buns with enough seeds that we don’t feel cheated by them, but that we also don’t feel satisfied by. This reflects one of the common assumptions of logic. Mathematicians know it as the Law of the Excluded Middle. A thing is true or it is not true. There is no middle case. This is fine for logic. But for everyday words?

It doesn’t work when considering sesame-seed buns. I can imagine a bun that is not satisfying, but also is not unsatisfying. Surely we can make some logical provision for the concept of “meh”. But now instead of drawing one arbitrary line between “satisfying” and “unsatisfying”, we must draw two, one of them between “unsatisfying” and “meh” and the other between “meh” and “satisfying”. There is a potential here for regression. Also for the thought of a bun that’s “satisfying-meh-satisfying by unsatisfying”. I shall step away from this concept.

But there are more subtle ways to not exclude the middle. For example, we might decide a statement’s truth exists on a spectrum. We can match how true a statement is to a number. Suppose an obvious falsehood has truth zero and an unimpeachable truth has truth one, with normal mortal statements falling somewhere in between. “This bun with a single sesame seed is satisfying” might have a truth of 0.01. This perhaps reflects the tastes of people who say they want sesame seeds but don’t actually care. “This bun with fifteen sesame seeds is satisfying” might have a truth of 0.25, say. “This bun with forty sesame seeds is satisfying” might have a truth of 0.97. (It’s true for everyone except those who remember the flush times of the 43-seed bun.) This seems to capture the idea that nothing is always wholly anything. But we can still step into absurdity. Suppose “this bun with 23 sesame seeds is satisfying” has a truth of 0.50. Then “this bun with 23 sesame seeds is not satisfying” should also have a truth of 0.50. What do we make of the statement “this bun with 23 sesame seeds is simultaneously satisfying and not satisfying”? Do we make of it something different from “this bun with 23 sesame seeds is simultaneously satisfying and satisfying”?
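This spectrum-of-truth scheme is essentially Zadeh’s fuzzy logic, where NOT, AND, and OR are commonly taken to be complement, minimum, and maximum. A minimal sketch, using the made-up truth values from above (the function names are mine):

```python
# Fuzzy truth values live on the interval [0, 1] instead of {true, false}.
def f_not(a):
    return 1.0 - a

def f_and(a, b):
    return min(a, b)

def f_or(a, b):
    return max(a, b)

satisfying = 0.50  # "this bun with 23 sesame seeds is satisfying"

# Both come out 0.50, not 0 and 1: neither the law of the excluded
# middle nor the law of non-contradiction survives at the halfway point.
both = f_and(satisfying, f_not(satisfying))    # 0.5
either = f_or(satisfying, f_not(satisfying))   # 0.5
```

Notice that min/max are only the most common choice of connectives; other “t-norms” give different, equally defensible fuzzy logics, which is its own kind of arbitrary-line problem.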

I see you getting tired in the back there. This may seem like word games. And we all know that human words are imprecise concepts. What has this to do with logic, or mathematics, or anything but the philosophy of language? And the first answer is that we understand logic and mathematics through language. When learning mathematics we get presented with definitions that seem absolute and indisputable. We start to see the human influence in mathematics when we ask why 1 is not a prime number. Later we see things like arguments about whether a ring has a multiplicative identity. And then there are more esoteric debates about the bounds of mathematical concepts.

Perhaps we can think of a concept we can’t describe in words. If we don’t express it to other people, the concept dies with us. We need words. No, putting it in symbols does not help. Mathematical symbols may look like slightly alien scrawl. But they are shorthand for words, and can be read as sentences, and there is this fuzziness in all of them.

And we find mathematical properties that share this problem. Consider: what is the color of the chemical element flerovium? Before you say I just made that up, flerovium was first synthesized in 1998, and officially named in 2012. We’d guess that it’s a silvery-white or maybe grey metallic thing. Humanity has only ever observed about ninety atoms of the stuff. It’s, for atoms this big, amazingly stable. We know an isotope of it that has a half-life of two and a half seconds. But it’s hard to believe we’ll ever have enough of the stuff to look at it and say what color it is.

That’s … all right, though? Maybe? Because we know the quantum mechanics that seem to describe how atoms form. And how they should pack together. And how light should be absorbed, and how light should be emitted, and how light should be scattered by it. At least in principle. The exact answers might be beyond us. But we can imagine having a solution, at least in principle. We can imagine the computer that after great diligent work gives us a picture of what a ten-ton lump of flerovium would look like.

So where does its color come from? Or any of the other properties that these atoms have as a group? No one atom has a color. No one atom has a density, either, or a viscosity. No one atom has a temperature, or a surface tension, or a boiling point. In combination, though, they do.

These are known to statistical mechanics, and through that thermodynamics, as intensive properties. If we have a partition function, which describes all the ways a system can be organized, we can extract information about these properties. They turn up as derivatives with respect to the right parameters of the system.
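As a hedged sketch of what “turn up as derivatives” means, in the standard statistical-mechanics notation (none of this notation appears in the post itself): if \(Z(T, V, N)\) is the canonical partition function, two familiar intensive properties fall out as

```latex
P = k_B T \left( \frac{\partial \ln Z}{\partial V} \right)_{T,N},
\qquad
\mu = -k_B T \left( \frac{\partial \ln Z}{\partial N} \right)_{T,V}
```

the pressure and the chemical potential, each a derivative of \(\ln Z\) with respect to the right parameter, with the others held fixed.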

But the same problem exists. Take a homogeneous gas. It has some temperature. Divide it into two equal portions. Both sides have the same temperature. Divide each half into two equal portions again. All four pieces have the same temperature. Divide again, and again, and a few more times. You eventually get containers with so little gas in them they don’t have a temperature. Where did it go? When did it disappear?

The counterpart to an intensive property is an extensive one. This is stuff like the mass or the volume or the energy of a thing. Cut the gas’s container in two, and each has half the volume. Cut it in half again, and each of the four containers has one-quarter the volume. Keep this up and you stay in uncontroversial territory, because I am not discussing Zeno’s Paradoxes here.

And like Zeno’s Paradoxes, the Sorites Paradox can seem at first trivial. We can distinguish a heap from a non-heap; who cares where the dividing line is? Or whether the division is a gradual change? It seems easy. To show why it is easy is hard. Each potential answer is interesting, and plausible, and when you think hard enough about it, not quite satisfying. Good material to think about.

I hope to find some material to think about for the letter ‘T’ and have it published Friday. It’ll be available at this link, as are the rest of these glossary posts.

So here’s a couple things I haven’t had the time to read and think about, but that I want someone to, possibly even me. First, a chain reference:

Paulos’s link in that URL was mistaken and in one of the responses to it he posted a correction. But it’s about this:

And ultimately about what seems a ridiculously impossible condition. Suppose that you have two games, both of which you expect to lose. Or two strategies to play a game, both of which you expect will lose. How do you apply them so that you maximize your chance of winning? Indeed, under the right circumstances, how can you have a better than 50% chance of winning? I have actually read this, but what I haven’t had is the chance to think about it. It may come in handy for pinball league though.
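The setup described there sounds like what is usually called Parrondo’s paradox: two games that each lose on average can win when mixed. A minimal simulation sketch, with the parameters most often quoted for the classic version (the function names and the small bias ε = 0.005 are my assumptions, not anything from the linked discussion):

```python
import random

EPS = 0.005  # small house edge, the value usually quoted for Parrondo's paradox

def play_game_a(capital, rng):
    """Game A: a plain biased coin. Losing on average."""
    return capital + (1 if rng.random() < 0.5 - EPS else -1)

def play_game_b(capital, rng):
    """Game B: win probability depends on whether capital is a multiple of 3.
    Also losing on average when played by itself."""
    p = (0.1 - EPS) if capital % 3 == 0 else (0.75 - EPS)
    return capital + (1 if rng.random() < p else -1)

def simulate(strategy, steps=100_000, seed=0):
    """Play one long session; `strategy` picks which game to play each turn."""
    rng = random.Random(seed)
    capital = 0
    for _ in range(steps):
        game = strategy(rng)
        capital = game(capital, rng)
    return capital

# Mixing the two losing games at random tends to drift the capital upward.
mixed = simulate(lambda rng: play_game_a if rng.random() < 0.5 else play_game_b)
```

If these are the right parameters, either game played alone drifts downward while the random mixture drifts up: Game B’s dependence on the running capital means Game A’s losses keep nudging it into its favorable branch.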

Here, MikesMathPage posts A simplified version of the Banach-Tarski paradox for kids. The Banach-Tarski paradox is one of those things I’m surprised isn’t more common in pop mathematics. It offers this wondrous and absolutely anti-intuitive consequence. Take a sphere the size of a golf ball. Slice it perfectly, using mathematically precise tools that could subdivide atoms, that is, more perfectly than mere matter could ever do. Cut it into pieces and take them apart. Then reassemble the pieces. You have two spheres, and they’re both the size of a planet. You can see why, when you first get this as a mathematics major, the instinct is to say you’ve heard something wrong. There being as many rationals as whole numbers, sure. There being more irrational numbers than rationals, that’s fine. There being as many points in a finite line segment as in an infinitely large ten-dimensional volume of space? Shaky but all right. But this? This? Still, you can kind of imagine that well, maybe there’s some weird thing where you make infinitely many cuts into uncountably infinitely many pieces and then you find out you just need five slices. Four, if you don’t use the point at the very center of the golf ball. Then you get cranky. Anyway the promise of the title, forming a version of this that kids will be comfortable with, is a big one.

This one I’m pretty sure I ended up at by way of Analysis Fact of the day. John D Cook’s Cover time of a graph: cliques, chains, and lollipops is about graphs. I mean graph theory graphs, which look kind of like circuit boards, or like mass transit diagrams. All dots and lines connecting them. Cook’s question: how long does it take to visit every point in one of these graphs, if you take a random walk? That is, each time you’re at a stop, you take one of the paths randomly? With equal chance of taking any of the paths connected there? There are some obviously interesting shapes and Cook looks into how you walk over them.
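This is easy to play with directly. A minimal sketch, not Cook’s code (the graph shapes and names are my own): represent the graph as an adjacency list and count steps until every vertex has been seen.

```python
import random

def cover_time(adjacency, start=0, seed=0):
    """Steps a uniform random walk takes to visit every vertex at least once."""
    rng = random.Random(seed)
    visited = {start}
    node, steps = start, 0
    while len(visited) < len(adjacency):
        node = rng.choice(adjacency[node])  # every edge out of here equally likely
        visited.add(node)
        steps += 1
    return steps

# A four-stop chain, 0 - 1 - 2 - 3, like a short subway line.
chain = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
steps = cover_time(chain)
```

Averaging `cover_time` over many seeds, for cliques versus chains versus lollipop graphs, is a quick way to see the comparisons Cook is making.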

That should do for now. I really need to get caught up on my reading. Please let me know if I’ve made a disastrous mistake with any of this.

## Reading the Comics, July 22, 2013

This is a shorter than usual entry for my roundup of comic strips mentioning mathematical topics, because I anticipate being a bit too busy to present this later in the week.

Ruben Bolling’s Tom the Dancing Bug (July 12) features one of his irresistible (to me) “Super-Fun-Pak Comix”, among them, A Voice From Another Dimension, which is a neat bit of Flatland-inspired fun between points in space. Edwin Abbott Abbott’s Flatland is one of those rare advanced-mathematical concepts that got firmly lodged into the pop culture, probably because it is a supremely accessible introduction to the concept of multidimensional space. People love learning about things which go against their everyday intuition, and the book (eventually) made the new notions of general relativity feel like they could be understood by anyone.