So then this happened: Comic Strip Master Command didn’t have much they wanted me to write about this week. I made out three strips as being relevant enough to discuss at all. And even they don’t have topics that I felt I could really dig into. Coincidence, surely, although I like to think they were trying to help me get ahead of deadline on my A To Z essays for this last week of the run. It’s a noble thought, but doomed. I haven’t been more than one essay ahead of deadline the last three months. I know in past years I’ve gotten three or even four essays ahead of time and I don’t know why it hasn’t worked this time. I am going ahead and blaming the fact that these essays have been way longer than previous years’. So anyway, I thank Comic Strip Master Command for trying to make my Monday and my Thursday this week be less packed. It won’t help.
Darrin Bell and Theron Heir’s Rudy Park for the 10th uses mathematics as shorthand for a deep, thought-out theory of something. In this case, Randy’s theory of how to interest women. (He has rather a large number of romantic events around him.) It’s easy to suppose that people can be modeled mathematically. Even a crude model, one supposing that people have things they like and dislike, can give us interesting results. This gets into psychology and sociology though. And probably requires computer modeling to get slightly useful results.
Randy’s blackboard has a good number of legitimate equations on it. They’re maybe not so useful to his problem of modeling people, though. The lower left corner, for example, holds three of Maxwell’s Equations, describing electromagnetism. I’m not sure about all of these, in part because I think some might be transcribed incorrectly. The second equation in the upper left, for example, looks like it’s getting at the curl of a conserved force field being zero, but it’s idiosyncratic to write that with a ‘d’ to start with. The symbols all over the right with both subscripts and superscripts look to me like tensor work. This turns up in electromagnetism, certainly. Tensors turn up anytime something, such as electrical conductivity, is different in different directions. But I’ve never worked deeply in those fields so all I can confidently say is that they look like they parse.
Lincoln Pierce’s Big Nate for the 14th is part of a bit where Nate’s trying to write a gruesome detective mystery for kids. I’m not sure that’s a ridiculous idea, at least if the gore could be done at a level that wouldn’t be too visceral. Anyway, Nate has here got the idea of merging some educational value into the whole affair. It’s not presented as a story problem, just as characters explaining stuff to one another. There probably would be some room for an actual problem where Barky and Winky wanted to know something and had to work out how to find it from what they knew, though.
And since it was a low-volume week, let me mention strips I didn’t decide fit. Ray Kassinger asked about Tim Rickard’s Brewster Rockit for the 12th. Might it be a play on Schrödinger’s Cat, the famous thought-experiment about how to understand the mathematics of quantum mechanics? It’s possible, but I think it’s more likely just that cats like sitting in boxes. Thaves’s Frank and Ernest for the 13th looks like it should be an anthropomorphic numerals joke. But it’s playing on the idiom about three being a crowd, and the whole of the mathematical content is that three is a number. John Zakour and Scott Roberts’s Maria’s Day for the 15th mentions mathematics. Particularly, Maria wishing they weren’t studying it. It’s a cameo appearance; it could be any subject whose value a student doesn’t see. That’s all I can make of it.
That there were twelve comic strips making my cut as mention-worthy this week should have let me do three essays of four comics each. But the desire to include all the comics from the same day in one essay leaves me one short here. So be it. Three of the four cartoonists featured here have a name of Sansom or Samson, so, that’s an edition title for you. No, Sam and Silo do not appear here.
Art Sansom and Chip Sansom’s Born Loser for the 6th uses arithmetic as a test of deference. Will someone deny a true thing in order to demonstrate loyalty? Arithmetic is full of things that are inarguably true. If we take the ordinary meanings of one, plus, equals, and three, it can’t be that one plus one equals three. Most fields of human endeavor are vulnerable to personal taste, or can get lost in definitions and technicalities. Or the advance of knowledge: my love and I were talking last night how we remembered hearing, as kids, the trivia that panda bears were not really bears, but a kind of raccoon. (Genetic evidence has us now put giant pandas with the bears, and red pandas as part of the same superfamily as raccoons, but barely.) Or even be subject to sarcasm. Arithmetic has a harder time of that. Mathematical ideas do evolve in time, certainly. But basic arithmetic is pretty stable. Logic is also a reliable source of things we can be confident are true. But arithmetic is more familiar than most logical propositions.
Samson’s Dark Side of the Horse for the 8th is the Roman Numerals joke for the week. It’s also a bit of a wordplay joke, although the music wordplay rather than mathematics. Me, I still haven’t heard a clear reason why ‘MIC’ wouldn’t be a legitimate Roman numeral representation of 1099. I’m not sure whether ‘MIC’ would step on or augment the final joke, though.
Pab Sungenis’s New Adventures of Queen Victoria for the 8th has a commedia dell’arte-based structure for its joke. (The strip does that, now and then.) The comic uses a story problem, with the calculated answer rejected for the nonsense it would be. I suppose it must be possible for someone to eat eighty apples over a long enough time that it’s not distressing, and yet another twenty apples wouldn’t spoil. I wouldn’t try it, though.
The Extreme Value Theorem, which I chose to write about, is a fundamental bit of analysis. There is also a similarly-named but completely unrelated Extreme Value Theory. This exists in the world of statistics. That’s about outliers, and about how likely it is you’ll find an even more extreme outlier if you continue sampling. This is valuable in risk assessment: put another way, it’s the question of what neighborhoods you expect to flood based on how the river’s overflowed the last hundred years. Or be in a wildfire, or be hit by a major earthquake, or whatever. The more I think about it the more I realize that’s worth discussing too. Maybe in the new year, if I decide to do some A To Z extras.
And then there are theorems that seem the opposite. Ones that seem so obvious, and so obviously true, that they hardly seem like mathematics. If they’re not axioms, they might as well be. The extreme value theorem is one of these.
It’s a theorem about functions. Here, functions that have a domain and a range that are both real numbers. Even more specifically, about continuous functions. “Continuous” is a tricky idea to make precise, but we don’t have to do it. A century of mathematicians worked out meanings that correspond pretty well to what you’d imagine it should mean. It means you can draw a graph representing the function without lifting the pen. (Do not attempt to use this definition at your thesis defense. I’m skipping a century’s worth of hard thinking about the subject.)
And it’s a theorem about “extreme” values. “Extreme” is a convenient word. It means “maximum or minimum”. We’re often interested in the greatest or least value of a function. Having a scheme to find the maximum is as good as having one to find a minimum. So there’s little point talking about them as separate things. But that forces us to use a bunch of syllables. Or to adopt a convention that “by maximum we always mean maximum or minimum”. We could say we mean that, but I’ll bet a good number of mathematicians, and 95% of mathematics students, would forget the “or minimum” within ten minutes. “Extreme”, then. It’s short and punchy and doesn’t commit us to a maximum or a minimum. It’s simply the most outstanding value we can find.
The Extreme Value Theorem doesn’t help us find them. It only proves to us there is an extreme to find. Particularly, it says that if a continuous function has a domain that’s a closed interval, then it has to have a maximum and a minimum. And it has to attain the maximum and the minimum at least once each. That is, something in the domain matches to the maximum. And something in the domain matches to the minimum. Could be multiple times, yes.
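None of this is in the original essay, but a quick numerical sketch may help show what the theorem promises. Here is a check, on a cubic of my own choosing rather than anything from the text, that a continuous function on a closed interval attains both extremes:

```python
# Sketch of the Extreme Value Theorem's promise, on an example function
# of my own choosing: f is continuous on the closed interval [-2, 2],
# so it must attain a maximum and a minimum somewhere on it.

def f(x):
    return x**3 - 3*x

a, b = -2.0, 2.0
n = 100_000
xs = [a + (b - a) * i / n for i in range(n + 1)]
ys = [f(x) for x in xs]

# Sampling only approximates the extremes; the theorem is what
# guarantees they exist and are attained somewhere in [a, b].
print(max(ys))  # 2.0, attained at x = -1 and again at x = 2
print(min(ys))  # -2.0, attained at x = 1 and again at x = -2
```

That the maximum and minimum are each attained twice here is the “could be multiple times” case.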
This might not seem like much of a theorem. Existence proofs rarely do. It’s a bias, I suppose. We like to think we’re out looking for solutions. So we suppose there’s a solution to find. Checking that there is an answer before we start looking? That seems excessive. Before heading to the airport we might check the flight wasn’t delayed. But we almost never check that there is still a Newark to fly to. I’m not sure, in working out problems, that we check it explicitly. We decide early on that we’re working with continuous functions and so we can try out the usual approaches. That we use the theorem becomes invisible.
And that’s sort of the history of this theorem. The Extreme Value Theorem, for example, is part of how we now prove Rolle’s Theorem. Rolle’s theorem is about functions continuous and differentiable on the interval from a to b. And functions that have the same value for a and for b. The conclusion is the function has got a local maximum or minimum in-between these. It’s the theorem depicted in that xkcd comic you maybe didn’t check out a few paragraphs ago. Rolle’s Theorem is named for Michel Rolle, who proved the theorem (for polynomials) in 1691. The Indian mathematician Bhaskara II, in the 12th century, is credited with stating the theorem too. The Extreme Value Theorem was proven around 1860. (There was an earlier proof, by Bernard Bolzano, whose name you’ll find all over talk about limits and functions and continuity and all. But that was unpublished until 1930. The proofs known about at the time were by Karl Weierstrass. His is the other name you’ll find all over talk about limits and functions and continuity and all. Go on, now, guess who it was who proved the Extreme Value Theorem. And guess what theorem, bearing the name of two important 19th-century mathematicians, is at the core of proving that. You need at most two chances!) That is, mathematicians were comfortable using the theorem before it had a clear identity.
Once you know that it’s there, though, the Extreme Value Theorem’s a great one. It’s useful. Rolle’s Theorem I just went through. There’s also the quite similar Mean Value Theorem. This one is about functions continuous and differentiable on an interval. It tells us there’s at least one point where the derivative is equal to the mean slope of the function on that interval. This is another theorem that’s a quick proof once you have the Extreme Value Theorem. Or we can get more esoteric. There’s a technique known as Lagrange Multipliers. It’s a way to find where on a constrained surface a function is at its maximum or minimum. It’s a clever technique, one that I needed time to accept as a thing that could possibly work. And why should it work? Go ahead, guess what the centerpiece of at least one method of proving it is.
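The Mean Value Theorem’s claim can also be checked numerically. Here is a sketch, on a cubic of my own picking: somewhere on the interval the derivative equals the mean slope.

```python
# Numerical sketch of the Mean Value Theorem, on my own example:
# f(x) = x^3 on [0, 2]. Somewhere in between, f'(c) equals the
# mean slope (f(b) - f(a)) / (b - a).

def f(x):
    return x**3

def fprime(x):
    return 3 * x**2

a, b = 0.0, 2.0
mean_slope = (f(b) - f(a)) / (b - a)  # (8 - 0) / 2 = 4

# Scan a fine grid for the point where f' comes closest to that slope.
n = 100_000
c = min((a + (b - a) * i / n for i in range(n + 1)),
        key=lambda x: abs(fprime(x) - mean_slope))
print(c)  # about 1.1547, that is, 2/sqrt(3), where 3c^2 = 4
```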
Step back from calculus and into real analysis. That’s the study of why calculus works, and how real numbers work. The Extreme Value Theorem turns up again and again. Like, one technique for defining the integral itself is to approximate a function with a “stepwise” function. This is one that looks like a pixellated, rectangular approximation of the function. The definition depends on having a stepwise rectangular approximation that’s as close as you can get to a function while always staying less than it. And another stepwise rectangular approximation that’s as close as you can get while always staying greater than it.
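The stepwise squeeze described above can be sketched in a few lines; the function and interval here are my own example, not from the essay.

```python
# Lower and upper stepwise (rectangular) approximations of an integral.
# Example of my own: f(x) = x^2 on [0, 1], whose integral is exactly 1/3.

def f(x):
    return x**2

a, b, n = 0.0, 1.0, 1000
width = (b - a) / n
lower = upper = 0.0
for i in range(n):
    left, right = a + i * width, a + (i + 1) * width
    # f is increasing on [0, 1], so each strip's least value is at its
    # left edge and its greatest at its right edge.
    lower += f(left) * width
    upper += f(right) * width

print(lower, upper)  # the true value 1/3 sits between them
```

As the strips get thinner the two stepwise totals squeeze together, which is the idea the definition of the integral rests on.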
And then other results. Often in real analysis we want to know about whether sets are closed and bounded. The Extreme Value Theorem has a neat corollary. Start with a continuous function with domain that’s a closed and bounded interval. Then, this theorem demonstrates, the range is also a closed and bounded interval. I know this sounds like a technical point. But it is the sort of technical point that makes life easier.
The Extreme Value Theorem even takes on meaning when we don’t look at real numbers. We can rewrite it in topological spaces. These are sets of points for which we have an idea of a “neighborhood” of points. We don’t demand that we know what distance is exactly, though. What had been a closed and bounded interval becomes a mathematical construct called a “compact set”. The idea of a continuous function changes into one about the preimage of an open set being another open set. And there is still something recognizably the Extreme Value Theorem. It tells us about things called the supremum and infimum, which are slightly different from the maximum and minimum. Just enough to confuse the student taking real analysis the first time through.
Topological spaces are an abstracted concept. Real numbers are topological spaces, yes. But many other things also are. Neighborhoods and compact sets and open sets are also abstracted concepts. And so this theorem has its same quiet utility in these many spaces. It’s just there quietly supporting more challenging work.
And then I noticed there were a bunch of comic strips with some kind of mathematical theme on the same day. Always fun when that happens.
Bill Holbrook’s On The Fastrack uses one of Holbrook’s common motifs. That’s depicting some common metaphor as literal. In this case it’s “massaging the numbers”, which might seem not strictly mathematics. But while numbers are interesting, they’re also useful. To be useful they must connect to something we want to know. They need context. That context is always something of human judgement. If the context seems inappropriate to the listener, she thinks the presenter is massaging the numbers. If the context seems fine, we trust the numbers as showing something true.
Scott Hilburn’s The Argyle Sweater is a seasonal pun that couldn’t wait for a day closer to Christmas. I’m a little curious why not. It would be the same joke with any subject, certainly. The strip did make me wonder if Ebenezer Scrooge, in-universe, might have taken calculus. This led me to see that it’s a bit vague what, precisely, Scrooge, or Scrooge-and-Marley, did. The movies are glad to position him as having a warehouse, and importing and exporting things, and making and collecting on loans and whatnot. These are all trades that mathematicians would like to think benefit from knowing advanced mathematics. The logic of making loans implies attention be paid to compounding interest, risks, and expectation values, as well as projecting cash-flow a fair bit into the future. But in the original text he doesn’t make any stated loans, and the only warehouse anyone enters is Fezziwig’s. Well, the Scrooge and Marley sign stands “above the warehouse door”, but we only ever go in to the counting-house. And yes, what Scrooge does besides gather money and misery is irrelevant to the setting of the story.
Teresa Burritt’s Dadaist strip Frog Applause uses knowledge of mathematics as an emblem of intelligence. “Multivariate analysis” is a term of art from statistics. It’s about measuring how one variable changes depending on two or more other variables. The goal is obvious: we know there are many things that influence anything of interest. Can we find what things have the strongest effects? The weakest effects? There are several ways we might mean “strongest” effect, too. It might mean that a small change in the independent variable produces a big change in the dependent one. Or it might mean that there’s very little noise, that a change in the independent variable produces a reliable change in the dependent one. Or we might have several variables that are difficult to measure precisely on their own, but with a combination that’s noticeable. The basic calculations for this look a lot like those for single-variable analysis. But there’s much more calculation. It’s more tedious, at least. My reading suggests that multivariate analysis didn’t develop much until there were computers cheap enough to do the calculations. Might be coincidence, though. Many machine-learning techniques can be described as multivariate analysis problems.
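For flavor, here is a minimal multivariate-analysis sketch: fitting one dependent variable against two independent ones at once, by least squares. The data and the “true” coefficients are entirely invented for illustration.

```python
# A small multivariate-analysis sketch in plain Python: fit
# y ≈ b1*x1 + b2*x2 by least squares, solving the 2x2 normal
# equations by hand. The data and coefficients are made up.
import random

random.seed(0)
n = 2000
x1 = [random.gauss(0, 1) for _ in range(n)]
x2 = [random.gauss(0, 1) for _ in range(n)]
# Invented relationship: y = 3*x1 - 0.5*x2, plus a little noise.
y = [3.0 * a - 0.5 * b + random.gauss(0, 0.1) for a, b in zip(x1, x2)]

# Normal equations: [[s11, s12], [s12, s22]] @ [b1, b2] = [s1y, s2y]
s11 = sum(a * a for a in x1)
s22 = sum(b * b for b in x2)
s12 = sum(a * b for a, b in zip(x1, x2))
s1y = sum(a * c for a, c in zip(x1, y))
s2y = sum(b * c for b, c in zip(x2, y))

det = s11 * s22 - s12 * s12
b1 = (s1y * s22 - s2y * s12) / det
b2 = (s2y * s11 - s1y * s12) / det
print(b1, b2)  # close to 3.0 and -0.5
```

With more variables the bookkeeping grows fast, which is the tedium the paragraph above mentions; hence computers.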
Greg Evans’s Luann Againn is a Pi Day joke from before the time when Pi Day was a thing. Brad’s magazine flipping like that is an unusual bit of throwaway background humor for the comic strip.
Nobody had a suggested topic starting with ‘W’ for me! So I’ll take that as a free choice, and get lightly autobiographical.
Witch of Agnesi.
I know I encountered the Witch of Agnesi while in middle school. Eighth grade, if I’m not mistaken. It was a footnote in a textbook. I don’t remember much of the textbook. What I mostly remember of the course was how much I did not fit with the teacher. The only relief from boredom that year was the month we had a substitute and the occasional interesting footnote.
It was in a chapter about graphing equations. That is, finding curves whose points have coordinates that satisfy some equation. In a bit of relief from lines and parabolas the footnote offered this:

y = 8a³ / (x² + 4a²)
In a weird tantalizing moment the footnote didn’t offer a picture. Or say what an ‘a’ was doing in there. In retrospect I recognize ‘a’ as a parameter, and that different values of it give different but related shapes. No hint what the ‘8’ or the ‘4’ were doing there. Nor why ‘a’ gets raised to the third power in the numerator or the second in the denominator. I did my best with the tools I had at the time. Picked a nice easy boring ‘a’. Picked out values of ‘x’ and found the corresponding ‘y’ which made the equation true, and tried connecting the dots. The result didn’t look anything like a witch. Nor a witch’s hat.
It was one of a handful of biographical notes in the book. These were a little attempt to add some historical context to mathematics. It wasn’t much. But it was an attempt to show that mathematics came from people. Including, here, from Maria Gaëtana Agnesi. She was, I’m certain, the only woman mentioned in the textbook I’ve otherwise completely forgotten.
We have few names of ancient mathematicians. Those we have are often compilers like Euclid whose fame obliterated the people whose work they explained. Or they’re like Pythagoras, credited with discoveries by people who obliterated their own identities. In later times we have the mathematics done by, mostly, people whose social positions gave them time to write mathematics results. So we see centuries where every mathematician is doing it as their side hustle to being a priest or lawyer or physician or combination of these. Women don’t get the chance to stand out here.
Today of course we can name many women who did, and do, mathematics. We can name Emmy Noether, Ada Lovelace, and Marie-Sophie Germain. Challenged to do a bit more, we can offer Florence Nightingale and Sofia Kovalevskaya. Well, and also Grace Hopper and Margaret Hamilton if we decide computer scientists count. Katherine Johnson looks likely to make that cut. But in any case none of these people are known for work understandable in a pre-algebra textbook. This must be why Agnesi earned a place in this book. She’s among the earliest women we can specifically credit with doing noteworthy mathematics. (Also physics, but that’s off point for me.) Her curve might be a little advanced for that textbook’s intended audience. But it’s not far off, and pondering questions like “why 8a³? Why not a³?” is more pleasant, to a certain personality, than pondering what a directrix might be and why we might use one.
The equation might be a lousy way to visualize the curve described. The curve is one of that group of interesting shapes you get by constructions. That is, following some novel process. Constructions are fun. They’re almost a craft project.
For this we start with a circle. And two parallel tangent lines. Without loss of generality, suppose they’re horizontal, so, there’s lines at the top and the bottom of the circle.
Take one of the two tangent points. Again without loss of generality, let’s say the bottom one. Draw a line from that point over to the other line. Anywhere on the other line. There’s a point where the line you drew intersects the circle. There’s another point where it intersects the other parallel line. We’ll find a new point by combining pieces of these two points. The point is on the same horizontal as wherever your line intersects the circle. It’s on the same vertical as wherever your line intersects the other parallel line. This point is on the Witch of Agnesi curve.
Now draw another line. Again, starting from the lower tangent point and going up to the other parallel line. Again it intersects the circle somewhere. This gives another point on the Witch of Agnesi curve. Draw another line. Another intersection with the circle, another intersection with the opposite parallel line. Another point on the Witch of Agnesi curve. And so on. Keep doing this. When you’ve drawn all the lines that reach from the tangent point to the other line, you’ll have generated the full Witch of Agnesi curve. This takes more work than writing out y = 8a³ / (x² + 4a²), yes. But it’s more fun. It makes for neat animations. And I think it prepares us to expect the shape of the curve.
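If you’d rather let a computer draw the lines, here is a sketch of the construction. The coordinates are my own choice: the circle has radius a and rests on the origin, so the tangent lines are y = 0 and y = 2a.

```python
# The circle-and-parallel-lines construction, in coordinates of my
# choosing: circle of radius a resting on the origin, tangent lines
# y = 0 and y = 2a, lines drawn from the origin up to the top line.

a = 1.0  # the parameter from the curve's equation

def witch_point(t):
    """Point generated by the line from the origin to (t, 2a)."""
    slope = 2 * a / t
    # Where y = slope*x meets the circle x^2 + (y - a)^2 = a^2,
    # other than at the origin itself:
    x_circ = 2 * slope * a / (1 + slope**2)
    y_circ = slope * x_circ
    # The new point: x from the top-line intersection, y from the
    # circle intersection.
    return (t, y_circ)

# Every constructed point should satisfy y = 8a^3 / (x^2 + 4a^2).
diffs = [abs(witch_point(t)[1] - 8 * a**3 / (t**2 + 4 * a**2))
         for t in (-2.0, -1.0, 0.5, 3.0)]
print(diffs)  # each essentially zero
```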
It’s a neat curve. Between it and the lower parallel line is an area four times that of the circle that generated it. The shape is one we would get from looking at the derivative of the arctangent. So there’s some reasons someone working in calculus might find it interesting. And people did. Pierre de Fermat studied it, and found this area. Isaac Newton and Luigi Guido Grandi studied the shape, using this circle-and-parallel-lines construction. Maria Agnesi’s name attached to it after she published a calculus textbook which examined this curve. She showed, according to people who present themselves as having read her book, the curve and how to find it. And she showed its equation and found the vertex and asymptote line and the inflection points. The inflection points, here, are where the curve changes from being cupped upward to cupping downward, or vice-versa.
It’s a neat function. It’s got some uses. It’s a natural smooth-hill shape, for example. So this makes a good generic landscape feature if you’re modeling the flow over a surface. I read that solitary waves can have this curve’s shape, too.
And the curve turns up as a probability distribution. Take a fixed point. Pick lines at random that pass through this point. See where those lines reach a separate, straight line. Some regions are more likely to be intersected than are others. Chart how often any particular point is the intersection point. That chart will (given some assumptions I ask you to pretend you agree with) be a Witch of Agnesi curve. This might not surprise you. It seems inevitable from the circle-and-intersecting-line construction process. And that’s nice enough. As a distribution it looks like the usual Gaussian bell curve.
It’s different, though. And it’s different in strange ways. Like, for a probability distribution we can find an expected value. That’s … well, what it sounds like. But this is the strange probability distribution for which the law of large numbers does not work. Imagine an experiment that produces real numbers, with the frequency of each number given by this distribution. Run the experiment zillions of times. What’s the mean value of all the zillions of generated numbers? And it … doesn’t … have one. I mean, we know it ought to, it should be the center of that hill. But the calculations for that don’t work right. Taking a bigger sample makes the sample mean jump around more, not settle down the way it would for every other distribution. It’s a weird idea.
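For what it’s worth, statisticians call this the Cauchy distribution; that name is my addition, not something from the essay. A quick simulation shows the running mean of samples refusing to settle down:

```python
# Sampling the Cauchy distribution and watching the running mean of the
# samples wander rather than converge, as the text describes.
import math
import random

random.seed(2)

def cauchy():
    # A standard sampling trick: the tangent of a uniformly random
    # angle follows the Cauchy distribution.
    return math.tan(math.pi * (random.random() - 0.5))

total = 0.0
count = 0
means = []
for target in (100, 10_000, 1_000_000):
    while count < target:
        total += cauchy()
        count += 1
    means.append(total / count)

print(means)  # running means that jump around instead of converging
```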
Imagine carving a block of wood in the shape of this curve, with a horizontal lower bound and the Witch of Agnesi curve as the upper bound. Where would it balance? … The normal mathematical tools don’t say, even though the shape has an obvious line of symmetry. And a finite area. You don’t get this kind of weirdness with parabolas.
(Yes, you’ll get a balancing point if you actually carve a real one. This is because you work with finitely-long blocks of wood. Imagine you had a block of wood infinite in length. Then you would see some strange behavior.)
It teaches us more strange things, though. Consider interpolations, that is, taking a couple data points and fitting a curve to them. We usually start out looking for polynomials when we interpolate data points. This is because everything is polynomials. Toss in more data points. We need a higher-order polynomial, but we can usually fit all the given points. But sometimes polynomials won’t work. A problem called Runge’s Phenomenon can happen, where the more data points you have the worse your polynomial interpolation is. The Witch of Agnesi curve is one of those. Carl Runge used points on this curve, and attempts to fit polynomials to those points, to discover the problem. More data and higher-order polynomials make for worse interpolations. You get curves that look less and less like the original Witch. Runge is himself famous to mathematicians, known for “Runge-Kutta”. That’s a family of techniques to solve differential equations numerically. I don’t know whether Runge came to the weirdness of the Witch of Agnesi curve from considering how errors build in numerical integration. I can imagine it, though. The topics feel related to me.
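Here is a sketch of the phenomenon, using the function 1/(1 + 25x²) usually tied to Runge’s example, a close cousin of the Witch’s shape:

```python
# Runge's Phenomenon sketched with plain Lagrange interpolation on
# equally spaced points: more points make the worst-case error worse.

def f(x):
    return 1.0 / (1.0 + 25.0 * x * x)

def interp(xs, ys, x):
    """Evaluate the polynomial through the points (xs, ys) at x."""
    total = 0.0
    for i, xi in enumerate(xs):
        term = ys[i]
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

errs = []
for n in (5, 10, 20):
    nodes = [-1.0 + 2.0 * i / n for i in range(n + 1)]
    vals = [f(t) for t in nodes]
    grid = [-1.0 + 2.0 * k / 1000 for k in range(1001)]
    errs.append(max(abs(interp(nodes, vals, x) - f(x)) for x in grid))

print(errs)  # the worst-case miss grows as the degree does
```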
I understand how none of this could fit that textbook’s slender footnote. I’m not sure any of the really good parts of the Witch of Agnesi could even fit thematically in that textbook. At least beyond the fact of its interesting name, which any good blog about the curve will explain. That there was no picture, and that the equation was beyond what the textbook had been describing, made it a challenge. Maybe not seeing what the shape was teased the mathematician out of this bored student.
And next is ‘X’. Will I take Mr Wu’s suggestion and use that to describe something “extreme”? Or will I take another topic or suggestion? We’ll see on Friday, barring unpleasant surprises. Thanks for reading.
This installment took longer to write than you’d figure, because it’s the time of year we’re watching a lot of mostly Rankin/Bass Christmas specials around here. So I have to squeeze words out in-between baffling moments of animation and, like, arguing whether there’s any possibility that Jack Frost was not meant to be a Groundhog Day special that got rewritten to Christmas because the networks weren’t having it otherwise.
Jeffrey Caulfield and Brian Ponshock’s Yaffle for the 3rd is the anthropomorphic numerals joke for the week. … You know, I’ve always wondered in this sort of setting, what are two-digit numbers like? I mean, what’s the difference between a twelve and a one-and-two just standing near one another? How do people recognize a solitary number? This is a darned silly thing to wonder so there’s probably a good web comic about it.
John Hambrock’s The Brilliant Mind of Edison Lee for the 4th has Edison forecast the outcome of a basketball game. I can’t imagine anyone really believing in forecasting the outcome, though. The elements of forecasting a sporting event are plausible enough. We can suppose a game to be a string of events. Each of them has possible outcomes. Some of them score points. Some block the other team’s score. Some cause control of the ball (or whatever makes scoring possible) to change teams. Some take a player out, for a while or for the rest of the game. So it’s possible to run through a simulated game. If you know well enough how the people playing do various things? How they’re likely to respond to different states of things? You could certainly simulate that.
But all sorts of crazy things will happen, one game or another. Run the same simulation again, with different random numbers. The final score will likely be different. The course of action certainly will. Run the same simulation many times over. Vary it a little; what happens if the best player is a little worse than average? A little better? What if the referees make a lot of mistakes? What if the weather affects the outcome? What if the weather is a little different? So each possible outcome of the sporting event has some chance. We have a distribution of the possible results. We can judge an expected value, and what the range of likely outcomes is. This demands a lot of data about the players, though. Edison Lee can have it, I suppose. The premise of the strip is that he’s a genius of unlimited competence. It would be more likely to expect for college and professional teams.
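A toy version of this kind of simulation might look like the following. Every number here is invented; a real forecast would need data on the actual players.

```python
# Toy Monte Carlo forecast of a game: a string of possessions, each
# with some chance of scoring. All probabilities are made up.
import random

random.seed(1)

def simulate_game(p_home=0.48, p_away=0.45, possessions=100):
    home = away = 0
    for _ in range(possessions):
        if random.random() < p_home:
            home += 2
        if random.random() < p_away:
            away += 2
    return home, away

# Each run is one plausible version of the game; many runs give a
# distribution of outcomes rather than a single prediction.
runs = [simulate_game() for _ in range(10_000)]
home_win_rate = sum(1 for h, a in runs if h > a) / len(runs)
print(home_win_rate)  # the slightly better team wins most runs, not all
```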
Brian Basset’s Red and Rover for the 4th uses arithmetic as the homework to get torn up. I’m not sure it’s just a cameo appearance. It makes a difference to the joke as told that there’s division and long division, after all. But it could really be any subject.
I liked that episode. I’ve got happy memories of the time when I first saw it. I thought the sketch in which Crow T Robot got so volume-obsessed was goofy and dumb in the fun-nerd way.
I accept Mr Kassinger’s challenge only I’m going to take it seriously.
How big is a thing?
There is a legend about Thomas Edison. He was unimpressed with a new hire. So he hazed the college-trained engineer who deeply knew calculus. He demanded the engineer tell him the volume within a light bulb. The engineer went to work, making measurements of the shape of the bulb’s outside. And then started the calculations. This involves a calculus technique called “volumes of rotation”. This can tell the volume within a rotationally symmetric shape. It’s tedious, especially if the outer edge isn’t some special nice shape. Edison, fed up, took the bulb, filled it with water, poured that out into a graduated cylinder and said that was the answer.
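The “volumes of rotation” chore the engineer faced can be sketched with the disk method: V equals π times the integral of r(x)² dx, for a profile r(x) rotated about the x-axis. The bulb-ish profile below is my own invention, chosen so the exact answer is easy to check.

```python
# Disk-method volume of rotation, approximated by the midpoint rule:
# thin disks of radius r(x) and thickness dx, summed up. The radius
# profile r is made up for illustration.
import math

def r(x):
    # Invented, vaguely bulb-shaped radius profile on [0, 1].
    return 0.5 * math.sin(math.pi * x) + 0.1

def volume_of_rotation(r, a, b, n=100_000):
    """Midpoint rule for V = pi * integral of r(x)^2 dx."""
    dx = (b - a) / n
    return sum(math.pi * r(a + (i + 0.5) * dx) ** 2 * dx
               for i in range(n))

v = volume_of_rotation(r, 0.0, 1.0)
print(v)  # matches the exact value 0.135*pi + 0.2, about 0.624
```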
I’m skeptical of legends. I’m skeptical of stories about the foolish intellectual upstaged by the practical man-of-action. And I’m skeptical of Edison because, jeez, I’ve read biographies of the man. Even the fawning ones make him out to be yeesh.
But the legend’s Edison had a point. If the volume of a shape is not how much stuff fits inside the shape, what is it? And maybe some object has too complicated a shape to find its volume. Can we think of a way to produce something with the same volume, but that is easier? Sometimes we can. When we do this with straightedge and compass, the way the Ancient Greeks found so classy, we call this “quadrature”. It’s called quadrature from its application in two dimensions. It finds, for a shape, a square with the same area. For a three-dimensional object, we find a cube with the same volume. Cubes are easy to understand.
Straightedge and compass can’t do everything. Indeed, there’s so much they can’t do. Some of it is stuff you’d think it should be able to, like, find a cube with the same volume as a sphere. Integration gives us a mathematical tool for describing how much stuff is inside a shape. It’s even got a beautiful shorthand expression. Suppose that D is the shape. Then its volume V is:

V = ∭_D dV
Here “dV” is the “volume form”, a description of how the coordinates we describe a space in relate to the volume. The ∭ is jargon, meaning, “integrate over the whole volume”. The subscript “D” modifies that phrase by adding “of D” to it. Writing “D” is shorthand for “these are all the points inside this shape, in whatever coordinate system you use”. If we didn’t do that we’d have to say, on each integral sign, what points are inside the shape, coordinate by coordinate. At this level the equation doesn’t offer much help. It says the volume is the sum of infinitely many, infinitely tiny pieces of volume. True, but that doesn’t give much guidance about whether it’s more or less than two cups of water. We need to get more specific formulas, usually. We need to pick coordinates, for example, and say what coordinates are inside the shape. A lot of the resulting formulas can’t be integrated exactly. Like, an ellipsoid? Maybe you can integrate that. Don’t try without getting hazard pay.
We can approximate this integral. Pick a tiny shape whose volume is easy to know. Fill your shape with duplicates of it. Count the duplicates. Multiply that count by the volume of this tiny shape. Done. This is numerical integration, sometimes called “numerical quadrature”. If we’re being generous, we can say the legendary Edison did this, using water molecules as the tiny shape. And working so that he didn’t need to know the exact count or the volume of individual molecules. Good computational technique.
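That counting scheme is easy to sketch in code. A minimal version, assuming only that we can test whether a point is inside the shape; I use a sphere here since we know its true volume:

```python
import math

def volume_by_counting(inside, lo, hi, h):
    """Fill the box [lo, hi]^3 with little cubes of side h, count the cubes
    whose center passes the inside() test, and multiply by the tiny volume."""
    count = 0
    steps = int(round((hi - lo) / h))
    for i in range(steps):
        x = lo + (i + 0.5) * h
        for j in range(steps):
            y = lo + (j + 0.5) * h
            for k in range(steps):
                z = lo + (k + 0.5) * h
                if inside(x, y, z):
                    count += 1
    return count * h ** 3

# the unit sphere: its exact volume is (4/3) * pi, about 4.19
approx = volume_by_counting(lambda x, y, z: x * x + y * y + z * z <= 1.0,
                            -1.0, 1.0, 0.05)
```

Shrinking h improves the estimate, at the cost of counting a great many more cubes. That trade is the whole business of numerical quadrature.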
It’s hard not to feel we’re begging the question, though. We want the volume of something. So we need the volume of something else. Where does that volume come from?
Well, where does an inch come from? Or a centimeter? Whatever unit you use? You pick something to use as reference. Any old thing will do. Which is why you get fascinating stories about choosing what to use. And bitter arguments about which of several alternatives to use. And we express the length of something as some multiple of this reference length.
Volume works the same way. Pick a reference volume, something that can be one unit-of-volume. Other volumes are some multiple of that unit-of-volume. Possibly a fraction of that unit-of-volume.
Usually we use a reference volume that’s based on the reference length. Typically, we imagine a cube that’s one unit of length on each side. The volume of this cube with sides of length 1 unit-of-length is then 1 unit-of-volume. This seems all nice and orderly and it’s surely not because mathematicians have been paid off by six-sided-dice manufacturers.
Does it have to be?
That we need some reference volume seems inevitable. We can’t very well say the volume of something is ten times nothing-in-particular. Does that reference volume have to be a cube? Or even a rectangle or something else? It seems obvious that we need some reference shape that tiles, that can fill up space by itself … right?
What if we don’t?
I’m going to drop out of three dimensions a moment. Not because it changes the fundamentals, but because it makes something easier. Specifically, it makes it easier if you decide you want to get some construction paper, cut out shapes, and try this on your own. What this will tell us about area is just as true for volume. Area, for a two-dimensional space, and volume, for a three-dimensional one, describe the same thing. If you’ll let me continue, then, I will.
So draw a figure on a clean sheet of paper. What’s its area? Now imagine you have a whole bunch of shapes with reference areas. A bunch that have an area of 1. That’s by definition. That’s our reference area. A bunch of smaller shapes with an area of one-half. By definition, too. A bunch of smaller shapes still with an area of one-third. Or one-fourth. Whatever. Shapes with areas you know because they’re marked on them.
Here’s one way to find the area. Drop your reference shapes, the ones with area 1, on your figure. How many do you need to completely cover the figure? It’s all right to cover more than the figure. It’s all right to have some of the reference shapes overlap. All you need is to cover the figure completely. … Well, you know how many pieces you needed for that. You can count them up. You can add up the areas of all these pieces needed to cover the figure. So the figure’s area can’t be any bigger than that sum.
Can’t be exact, though, right? Because you might get a different number if you covered the figure differently. If you used smaller pieces. If you arranged them better. This is true. But imagine all the possible reference shapes you had, and all the possible ways to arrange them. There’s some smallest area of those reference shapes that would cover your figure. Is there a more sensible idea for what the area of this figure would be?
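The covering idea can be sketched the same way. Here, as my own hypothetical example, the reference shapes are little grid squares, the figure is the unit disk, and we sum the areas of every square that touches the disk at all. The total can only overshoot the true area π, and smaller squares overshoot less:

```python
import math

def covering_area(touches, lo, hi, h):
    """Over-estimate of area: the total area of every grid square of
    side h that touches the figure (per the touches() test)."""
    total = 0.0
    steps = int(round((hi - lo) / h))
    for i in range(steps):
        for j in range(steps):
            if touches(lo + i * h, lo + j * h, h):
                total += h * h
    return total

def disk_touches(x, y, h):
    # the square [x, x+h] x [y, y+h] meets the unit disk exactly when its
    # nearest point to the origin lies within distance 1
    nx = max(x, min(0.0, x + h))
    ny = max(y, min(0.0, y + h))
    return nx * nx + ny * ny <= 1.0

coarse = covering_area(disk_touches, -1.5, 1.5, 0.1)    # big reference squares
fine = covering_area(disk_touches, -1.5, 1.5, 0.005)    # smaller ones do better
```

Both estimates sit above π, and the finer covering sits closer: the smallest possible total over all coverings is the sensible candidate for “the” area.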
And put this into three dimensions. If we start from some reference shapes of volume 1 and maybe 1/2 and 1/3 and whatever other useful fractions there are? Doesn’t this covering make sense as a way to describe the volume? Cubes or rectangles are easy to imagine. Tetrahedrons too. But why not any old thing? Why not, as the Mystery Science Theater 3000 episode had it, turkeys?
This is a nice, flexible, convenient way to define area. So now let’s see where it goes all bizarre. We know this thanks to Giuseppe Peano. He’s among the late-19th/early-20th century mathematicians who shaped modern mathematics. They did this by showing how much of our mathematics broke intuition. Peano was (here) exploring what we now call fractals. And noted a family of shapes that curl back on themselves, over and over. They’re beautiful.
And they fill area. Fill volume, if done in three dimensions. It seems impossible. If we use this covering scheme, and try to find the volume of a straight line, we get zero. Well, we find that any positive number is too big, and from that conclude that it has to be zero. Since a straight line has length, but not volume, this seems fine. But a Peano curve won’t go along with this. A Peano curve winds back on itself so much that there is some minimum volume to cover it.
This unsettles. But this idea of volume (or area) by covering works so well. To throw it away seems to hobble us. So it seems worth the trade. We allow ourselves to imagine a line so long and so curled up that it has a volume. Amazing.
And now I get to relax and unwind and enjoy a long weekend before coming to the letter ‘W’. That’ll be about some topic I figure I can whip out a nice tight 500 words about, and instead, produce some 1541-word monstrosity while I wonder why I’ve had no free time at all since August. Tuesday, give or take, it’ll be available at this link, as are the rest of these glossary posts. Thanks for reading.
Today, I get to wrap up November’s suggested discussion topics as prepared by Comic Strip Master Command.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 27th mentions along its way the Liar Paradox and Zeno’s Paradoxes. Both are ancient problems. The paradoxes arise from thinking with care and rigor about things we seem to understand intuitively. For the Liar Paradox it’s about what we mean to declare a statement true or false. For Zeno’s Paradoxes it’s about whether we think space (and time) are continuous or discrete. And, as the strip demonstrates, there is a particular kind of nerd that declares the obvious answer is the only possible answer and that it’s foolish to think deeper. To answer a question’s literal words while avoiding its point is a grand old comic tradition, of course, predating even the antijoke about chickens crossing roads. Which is what gives these answers the air of an old stage comedian.
Mark Tatulli’s Lio for the 28th features a cameo for mathematics. At least mathematics class. It’s painted as the most tedious part of the school day. I’m not sure this is quite right for Lio as a character. He’s clever in a way that I think harmonizes well with how mathematics brings out universal truths. But there is a difference between mathematics and mathematics class, of course.
Tom Toles’s Randolph Itch, 2am for the 28th shows how well my resolution to drop the strip from my rotation here has gone. I don’t seem to have found it worthy of mention before, though. It plays on the difference between a note of money, the number of units of currency that note represents, and between “zero” and “nothing”. Also I’m enchanted now by the idea that maybe some government might publish a zero-dollar bill. At least for the sake of movie and television productions that need realistic-looking cash.
In the footer joke Randolph mentions how you can never have enough zeroes. Yes, but I’d say that’s true of twenties, too. There is a neat sense in which this is true for working mathematicians, though. At least for those doing analysis. One of the reliable tricks that we learn to do in analysis is to “add zero” to a quantity. This is, literally, going from some expression that might be, say, “a – b” to “a + 0 – b”, which of course has the same value. The point of doing that is that we know other things equal to zero. For example, for any number L, “-L + L” is zero. So we get the original expression from “a + 0 – b” over to “a – L + L – b”. And that becomes useful if you picked L so that you know something about “a – L” and about “L – b”. Because then it tells you something about “a – b” that you didn’t know before. Picking that L, and showing something true about “a – L” and “L – b”, is the tricky part.
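Sketched out, with generic symbols of my own choosing, the trick often closes a standard ε/2 argument like so:

```latex
% add zero in the form -L + L, then split with the triangle inequality
|a - b| = |a - L + L - b| \le |a - L| + |L - b|
% and if L was chosen so that each piece is under epsilon/2,
|a - b| < \tfrac{\varepsilon}{2} + \tfrac{\varepsilon}{2} = \varepsilon
```

The zero contributed nothing to the value and everything to the argument.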
The Dumpties in the comic strip are presented as getting nauseated at the strange curling around. It’s good sense for the comic-in-the-comic, which just has to have something happen and doesn’t really need to make sense. But there is no real way to answer where a Möbius strip wraps around itself. I mean, we can declare it’s at the left and right ends of the strip as we hold it, sure. But this is an ad hoc placement. We can roll the belt along a little bit, not changing its shape, but changing the points where we think of the strip as turning over.
But suppose you were a flat creature, wandering a Möbius strip. Would you have any way to tell that you weren’t on the plane? You could, but it takes some subtle work. Like, you could try drawing shapes. These let you count a thing called the Euler Characteristic, which relates the number of vertices, edges, and faces of a polyhedron. The Euler Characteristic for a Möbius strip is the same as that for a Klein bottle, a cylinder, or a torus. You could try drawing regions, and coloring them in, calling on the four-color map theorem. (Here I want just to mention the five-color map theorem, which is, as these things go, easy to prove.) A map on the plane needs at most four colors to have no neighboring territories share a color along an edge. (Territories here are contiguous, and we don’t count territories meeting at only a point as sharing an edge.) Same for a sphere, which is good for we folks who have the job of coloring in both globes and atlases. It’s also the same for a cylinder. On a Möbius strip, this number is six. On a torus, it’s seven. So we could tell, if we were on a Möbius strip, that we were. It can be subtle to prove, is all.
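As a sketch of that first test: the Euler characteristic is bare arithmetic once you’ve counted the pieces of a drawing. The cube and torus-grid counts below are standard ones, not anything from the strip:

```python
def euler_characteristic(vertices, edges, faces):
    """V - E + F, a number that doesn't depend on how the shapes were drawn,
    only on the surface they were drawn on."""
    return vertices - edges + faces

# A cube, thought of as a drawing on a sphere: 8 vertices, 12 edges, 6 faces.
sphere_chi = euler_characteristic(8, 12, 6)

# An n-by-n grid of squares drawn on a torus wraps around in both directions,
# so it has n*n vertices, 2*n*n edges, and n*n faces. The same value, 0,
# comes out for a Mobius strip, a Klein bottle, or a cylinder.
n = 4
torus_chi = euler_characteristic(n * n, 2 * n * n, n * n)
```

The sphere’s 2 against the torus’s 0 is how the drawing test distinguishes surfaces; the Möbius strip’s 0 is why this test alone can’t finish the job, and the coloring test has to step in.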
My subject for today is another from Iva Sallay, longtime friend of the blog and creator of the Find the Factors recreational mathematics game. I think you’ll likely find something enjoyable at her site, whether it’s the puzzle or the neat bits of trivia as she works through all the counting numbers.
We don’t notice how unit fractions are around us. Likely there’s some in your pocket. Or there have been recently. Think of what you do when paying for a thing, when it’s not a whole number of dollars. (Pounds, euros, whatever the unit of currency is.) Suppose you have exact change. What do you give for the 38 cents?
Likely it’s something like a 25-cent piece and a 10-cent piece and three one-cent pieces. This is an American and Canadian solution. I know that 20-cent pieces are more common than 25-cent ones worldwide. It doesn’t make much difference; if you want it to be three 10-cent, one five-cent, and three one-cent pieces that’s as good. And granted, outside the United States it’s growing common to drop pennies altogether and round prices off to a five- or ten-cent value. Again, it doesn’t make much difference.
But look at the coins. The 25 cent piece is one-quarter of a dollar. It’s even called that, and stamped that on one side. I sometimes hear a dime called “a tenth of a dollar”, although mostly by carnival barkers in one-reel cartoons of the 1930s. A nickel is one-twentieth of a dollar. A penny is one-hundredth. A 20-cent piece is one-fifth of a dollar. And there are half-dollars out there, although not in the United States, not really anymore.
(Pre-decimalized currencies offered even more unit fractions. Using old British coins, for familiarity-to-me and great names, there were farthings, 1/960th of a pound; halfpennies, 1/480th; pennies, 1/240th; threepence, 1/80th of a pound; groats, 1/60th; sixpence, 1/40th; florins, 1/10th; half-crowns, 1/8th; crowns, 1/4th. And what seem to the modern wallet like impossibly tiny fractions like the half-, third-, and quarter-farthings used where 1/3840th of a pound might be a needed sum of money.)
Unit fractions get named and defined somewhere in elementary school arithmetic. They go on, becoming forgotten sometime after that. They might make a brief reappearance in calculus. There are some rational functions that get easier to integrate if you think of them as the sums of fractions, with constant numerators and polynomial denominators. These aren’t unit fractions. A unit fraction has a 1, the unit, in the numerator. But we see unit fractions along the way to doing these integrations, as an example. And see them in the promise that there are still more amazing integrals to learn how to do.
They get more attention if you take a history of computation class. Or read the subject on your own. Unit fractions stand out in history. We learn the Ancient Egyptians worked with fractions as sums of unit fractions. That is, had they dollars, they would not look at the 38/100 we do. They would look at 1/4 plus 1/10 plus 1/100 plus 1/100 plus 1/100. When we count change we are using, without noticing it, a very old computing scheme.
This isn’t quite true. The Ancient Egyptians seemed to shun repeating a unit like that. To use 1/100 once is fine; three times is suspicious. They would prefer something like 1/4 plus 1/8 plus 1/200. Or maybe some other combination. I just wrote out the first one I found.
But there are many ways we can make 38 cents using ordinary coins of the realm. There are infinitely many ways to make up any fraction using unit fractions. There’s surely a most “efficient”. Most efficient might be the one with the fewest terms. Most efficient might be the one that uses the smallest denominators. Choose what you like; no one knows a scheme that always turns up the most efficient, either way. We can always find some representation, though. It may not be “good”, but it will exist, which may be good enough. Leonardo of Pisa, or as he got named in the 19th century, Fibonacci, proved that was true.
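Fibonacci’s proof is constructive: the “greedy” method always peels off the largest unit fraction that still fits, and it always terminates. A rough sketch, with our 38/100 as the test case:

```python
import math
from fractions import Fraction

def egyptian(frac):
    """Fibonacci's greedy algorithm: repeatedly subtract the largest unit
    fraction no bigger than what's left. Returns the list of denominators.
    The numerator of the remainder strictly shrinks, so it terminates."""
    terms = []
    while frac > 0:
        d = math.ceil(Fraction(1) / frac)  # smallest d with 1/d <= frac
        terms.append(d)
        frac -= Fraction(1, d)
    return terms

denominators = egyptian(Fraction(38, 100))
# -> [3, 22, 825], that is, 38/100 = 1/3 + 1/22 + 1/825
```

The greedy answer is valid but not the coin-friendly one. Existence is all Fibonacci’s argument promises; “good” is a separate, and open, question.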
We may ask why the Egyptians used unit fractions. They seem inefficient compared to the way we work with fractions. Or, better, decimals. I’m not sure the question can have a coherent answer. Why do we have a fashion for converting fractions to a “proper” form? Why do we use the number of decimal points we do for a given calculation? Sometimes a particular mode of expression is the fashion. It comes to seem natural because everyone uses it. We do it too.
And there is practicality to them. Even efficiency. If you need π, for example, you can write it as 3 plus 1/8 plus 1/61 and your answer is off by under one part in a thousand. Combine this with the Egyptian method of multiplication, where you would think of (say) “11 times π” as “1 times π plus 2 times π plus 8 times π”. And with tables they had worked up which tell you what double 1/8 and double 1/61 would be in a normal representation. You can get rather good calculations without having to do more than addition and looking up doublings. Represent π as 3 plus 1/8 plus 1/61 plus 1/5020 and you’re correct to within one part in 130 million. That isn’t bad for having to remember four whole numbers.
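That doubling scheme survives in programming folklore as Russian-peasant (or Egyptian) multiplication. A small sketch of the “11 times” example:

```python
def egyptian_multiply(a, b):
    """Multiply by doubling: 11 * b = 1*b + 2*b + 8*b, since 11 = 1 + 2 + 8.
    Nothing but additions and doublings -- no times tables required."""
    total = 0
    while a > 0:
        if a & 1:        # this power of two appears in a's binary expansion
            total += b
        a >>= 1          # halve a, dropping the bit just handled
        b <<= 1          # double b
    return total
```

So `egyptian_multiply(11, 13)` adds up 13, 26, and 104 to get 143. It’s the same idea computers use for binary multiplication, four thousand years on.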
(The Ancient Egyptians, like many of us, were not absolutely consistent in only using unit fractions. They had symbols to represent 2/3 and 3/4, probably due to these numbers coming up all the time. Human systems vary to make the commonest stuff we do easier.)
Enough practicality or efficiency, if this is that. Is there beauty? Is there wonder? Certainly. Much of it is in number theory. Number theory splits between astounding results and results that would be astounding if we had any idea how to prove them. Many of the astounding results are about unit fractions. Take, for example, the harmonic series 1 + 1/2 + 1/3 + 1/4 + 1/5 + … . Truncate that series whenever you decide you’ve had enough. Different numbers of terms in this series will add up to different numbers. Infinitely many different numbers, eventually. The numbers will grow ever-higher. There’s no number so big that it won’t, eventually, be surpassed by some long-enough truncated harmonic series. And yet, past the number 1, it’ll never touch a whole number again. Infinitely many partial sums. Partial sums differing from one another by one googolplexth and smaller. And yet of the infinitely many whole numbers this series manages to miss them all, after its starting point. Worse, any sum of consecutive terms, not even starting from 1, will never hit a whole number. I can understand a person who thinks mathematics is boring, but how can anyone not find it astonishing?
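Exact rational arithmetic lets us watch this happen for as many terms as we have patience for. A small sketch:

```python
from fractions import Fraction

def harmonic_partial_sums(n):
    """The partial sums H_1, H_2, ..., H_n of 1 + 1/2 + 1/3 + ...,
    kept as exact fractions so 'is it a whole number' is a fair question."""
    sums, total = [], Fraction(0)
    for k in range(1, n + 1):
        total += Fraction(1, k)
        sums.append(total)
    return sums

sums = harmonic_partial_sums(60)
# H_60 has crept above 4, yet no partial sum after H_1 = 1 is a whole number
```

Checking sixty terms proves nothing about infinitely many, of course; the full result needs an argument about the powers of 2 in the denominators. But it’s a satisfying thing to see the denominators refuse to collapse to 1.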
There are more strange, beautiful things. Consider heptagonal numbers, which Iva Sallay knows well. These are numbers like 1 and 7 and 18 and 34 and 55 and 1288. Take a heptagonal number of, oh, beads or dots or whatever, and you can lay them out to form a regular seven-sided figure. Add together the reciprocals of the heptagonal numbers. What do you get? It’s a weird number. It’s irrational, which you maybe would have guessed as more likely than not. But it’s also transcendental. Most real numbers are transcendental. But it’s often hard to prove any specific number is.
Unit fractions creep back into actual use. For example, in modular arithmetic, they offer a way to turn division back into multiplication. Division, in modular arithmetic, tends to be hard. Indeed, if you need an algorithm to make random-enough numbers, you often will do something with division in modular arithmetic. Suppose you want to divide by a number x, modulo y, and x and y are relatively prime, though. Then unit fractions tell us how to turn this into a greatest-common-divisor problem.
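A sketch of how that works in practice: the extended Euclidean algorithm, the same computation that finds greatest common divisors, produces the “unit fraction” 1/x modulo y, and then dividing by x is just multiplying by it:

```python
def mod_inverse(x, y):
    """Find the unit fraction 1/x in arithmetic modulo y: the number u with
    (x * u) % y == 1. Extended Euclidean algorithm; needs gcd(x, y) == 1."""
    old_r, r = x, y          # remainders of the Euclidean algorithm
    old_s, s = 1, 0          # running coefficients: old_s * x = old_r (mod y)
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("x and y must be relatively prime")
    return old_s % y

# dividing by 3 modulo 7 is multiplying by 5, since 3 * 5 = 15 = 1 (mod 7)
```

So `mod_inverse(3, 7)` is 5, and “divide 6 by 3, mod 7” becomes `6 * 5 % 7`, which is 2, just as it should be.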
They teach us about our computers, too. Much of serious numerical mathematics involves matrix multiplication. Matrices are, for this purpose, tables of numbers. The Hilbert Matrix has elements that are entirely unit fractions. The Hilbert Matrix is really a family of square matrices. Pick any of the family you like. It can have two rows and two columns, or three rows and three columns, or ten rows and ten columns, or a million rows and a million columns. Your choice. The first row is made of the numbers 1, 1/2, 1/3, 1/4, and so on. The second row is made of the numbers 1/2, 1/3, 1/4, 1/5, and so on. The third row is made of the numbers 1/3, 1/4, 1/5, 1/6, and so on. You see how this is going.
Matrices can have inverses. It’s not guaranteed; matrices are like that. But the Hilbert Matrix does. It’s another matrix, of the same size. All the terms in it are integers. Multiply the Hilbert Matrix by its inverse and you get the Identity Matrix. This is a matrix, the same number of rows and columns as you started with. But nearly every element in the identity matrix is zero. The only exceptions are on the diagonal — first row, first column; second row, second column; third row, third column; and so on. There, the identity matrix has a 1. The identity matrix works, for matrix multiplication, much like the real number 1 works for normal multiplication.
Matrix multiplication is tedious. It’s not hard, but it involves a lot of multiplying and adding and it just takes forever. So set a computer to do this! And you get … uh …
For a small Hilbert Matrix and its inverse, you get an identity matrix. That’s good. For a large Hilbert Matrix and its inverse? You get garbage. “Large” maybe isn’t very large. A 12 by 12 matrix gives you trouble. A 14 by 14 matrix gives you a mess. Well, on my computer it does. Cute little laptop I got when my former computer suddenly died. On a better computer? One designed for computation? … You could do a little better. Less good than you might imagine.
The trouble is that computers don’t really do mathematics. They do an approximation of it, numerical computing. Most use a scheme called floating point arithmetic. It mostly works well. There’s a bit of error in every calculation. For most calculations, though, the error stays small. At least relatively small. The Hilbert Matrix, built of unit fractions, doesn’t respect this. It and its inverse have a “numerical instability”. Some kinds of calculations make errors explode. They’ll overwhelm the meaningful calculation. It’s a bit of a mess.
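We can see the blowup without trusting any numerical library: build the inverse exactly in rational arithmetic, then redo the multiplication in floating point. A sketch, with sizes chosen to match the essay:

```python
from fractions import Fraction

def hilbert(n):
    """The n-by-n Hilbert matrix, exactly: entry (i, j) is 1/(i + j + 1)."""
    return [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def invert(m):
    """Gauss-Jordan elimination with partial pivoting. With Fraction
    entries every step is exact, so the result is the true inverse."""
    n = len(m)
    a = [row[:] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(m)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        p = a[col][col]
        a[col] = [x / p for x in a[col]]
        for r in range(n):
            if r != col and a[r][col] != 0:
                f = a[r][col]
                a[r] = [x - f * y for x, y in zip(a[r], a[col])]
    return [row[n:] for row in a]

def identity_error(n):
    """Round H and its exact inverse to floats, multiply them in floating
    point, and report the worst deviation from the identity matrix."""
    h = hilbert(n)
    hinv = invert(h)
    hf = [[float(x) for x in row] for row in h]
    hinvf = [[float(x) for x in row] for row in hinv]
    worst = 0.0
    for i in range(n):
        for j in range(n):
            entry = sum(hf[i][k] * hinvf[k][j] for k in range(n))
            worst = max(worst, abs(entry - (1.0 if i == j else 0.0)))
    return worst

small = identity_error(4)    # essentially the identity matrix
big = identity_error(14)     # the 14-by-14 mess the essay mentions
```

The inverse’s entries are all integers, but enormous ones, so the tiny rounding in each float operation gets magnified until it swamps the answer. That magnification is the instability.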
Numerical instability is something anyone doing mathematics on the computer must learn. Must grow comfortable with. Must understand. The matrix multiplications, and inverses, that the Hilbert Matrix involves highlight those dangers. A great and urgent example of a subtle danger of computerized mathematics waits for us in these unit fractions. And we’ve known and felt comfortable with them for thousands of years.
There’ll be some mathematical term with a name starting ‘V’ that, barring surprises, should be posted Friday. What’ll it be? I have an idea at least. It’ll be available at this link, as are the rest of these glossary posts.
I knew that November 2018 was going to be a less busy month around here than October would. I didn’t have the benefit of hosting the Playful Mathematics Education Blog Carnival for it. I’m hoping to host the carnival again, though. Not until after the new year. Not until after I’ve finished the Fall 2018 A To Z and have had some time to recuperate. It’s a weird thing but writing two 1500-to-2000-word essays each week hasn’t lightened my workload the way I figured. If you’re interested in the current Blog Carnival, by the way, here it is. Anyway, as reversions to the norm go, November was not bad. Here’s what it looked like.
So there were 1,611 pages viewed here in November. Down from the 2,010 of October, but noticeably higher than September’s 1,505. That’s still a third-highest month (March 2018 was busier still). But it’s weirdly gratifying. There were 847 unique visitors logged in November. That’s down from October’s 1,063, and even September’s 874. I make this out as my fifth-most-visitors month on record. All those months have been this year.
85 things got liked in November. That’s down from October’s 94, up from September’s 65, and overall part of a weird pattern. My likes are definitely declining over time. But there’s little local peaks. If there’s any pattern it’s kind of a sawtooth, with the height of the teeth dropping. I have no explanation for this phenomenon. There were 36 comments in November, well down from October’s 60, but equal to September’s. It’s above the running average of the last two months (28.5 comments per month) but it’s still well below, like, the average commentary you can expect on the Comics Curmudgeon. Granted, we serve different purposes.
Of the most popular essays this month the top two were perennials. Some A to Z stuff filled out the rest. I’m including the top six posts here because there was a tie for fourth place, and sixth place was barely behind that. If this reason seems ad hoc, you understand it correctly. Read a lot around here were:
And where were all these readers coming from? Here’s the roster of countries and their readership totals:
70 countries sent me readers in November 2018. That’s down from October’s 74 but up from September’s 58. 13 of them were single-reader countries, down from October’s 23 and September’s 14. Czech Republic has been a single-reader country for three months. Colombia for four months now.
According to the Insights panel, I start the month at 71,506 total page views for the 1,185 posts I’ve done altogether. It also records 35,384 unique visitors, but I again have to defensively insist WordPress didn’t count unique visitors for the first couple months I was around here. I swear.
I published 23 posts in November. A to Z months tend to be busy ones. These posts held something like 26,644 words in total. For the 165 things I had posted this year, through to the start of December, I averaged 1,108 words per post. That’s up from the start of November’s 996 words per post, but still. I’m averaging 5.3 likes per post, and 2.7 comments per post. At the start of last month I was averaging 5.5 likes and 2.8 comments per post. This is probably not any important kind of variation. There’ve been 450 total comments and 870 total likes this year, as of the start of December.
Last week Comic Strip Master Command sent out just enough on-theme comics for two essays, the way I do them these days. The first half has some multiplication in two of the strips. So that’s enough to count as a theme for me.
Aaron Neathery’s Endtown for the 26th depicts a dreary, boring school day by using arithmetic. A lot of times tables. There is some credible in-universe reason to be drilling on multiplication like this. The setting is one where the characters can’t expect to have computers available. That granted, I’m not sure there’s a point to going up to memorizing four times 27. Going up to twelve-times seems like enough for common uses. For multiplying two- and longer-digit numbers together we usually break the problem up into a string of single-digit multiplications.
There are a handful of bigger multiplications that can make your life easier to know, like how four times 25 is 100. Or three times 33 is pretty near 100. But otherwise? … Of course, the story needs the class to do something dull and seemingly pointless. Going deep into multiplication tables communicates that to the reader quickly.
Thaves’s Frank and Ernest for the 26th is a spot of wordplay. Also a shout-out to my friends who record mathematics videos for YouTube. It is built on the conflation between the ideas of something multiplying and the amount of something growing. It’s easy to see where the idea comes from; just keep hitting ‘x 2’ on a calculator and the numbers grow excitingly fast. You get even more exciting results with ‘x 3’ or ‘x π’. But multiplying by 1 is still multiplication. As is multiplying by a number smaller than 1. Including negative numbers. That doesn’t hurt the joke any. That multiplying two things together doesn’t necessarily give you something larger is a consideration when you’re thinking rigorously about what multiplication can do. It doesn’t have to be part of normal speech.
Nate Frakes’s Break of Day for the 27th is the anthropomorphic numerals joke for the week. I don’t know that there’s anything in the other numerals being odds rather than evens, or a mixture of odds and evens. It might just be that they needed to be anything but 1.
Here is a surprising thought for the next time you consider remodeling the kitchen. It’s common to tile the floor. Perhaps some of the walls behind the counter. What patterns could you use? And there are infinitely many possibilities. You might leap ahead of me and say, yes, but they’re all boring. A tile that’s eight inches square is different from one that’s twelve inches square and different from one that’s 12.01 inches square. Fine. Let’s allow that all square tiles are “really” the same pattern. The only difference between a square two feet on a side and a square half an inch on a side is how much grout you have to deal with. There are still infinitely many possibilities.
You might still suspect me of being boring. Sure, there’s a rectangular tile that’s, say, six inches by eight inches. And one that’s six inches by nine inches. Six inches by ten inches. Six inches by one millimeter. Yes, I’m technically right. But I’m not interested in that. Let’s allow that all rectangular tiles are “really” the same pattern. So we have “squares” and “rectangles”. There are still infinitely many tile possibilities.
Let me shorten the discussion here. Draw a quadrilateral. One that doesn’t intersect itself. That is, there’s four corners, four lines, and there’s no X crossings. If you have that, then you have a tiling. Get enough of these tiles and arrange them correctly and you can cover the plane. Or the kitchen floor, if you have a level floor. It might not be obvious how to do it. You might have to rotate alternating tiles, or set them in what seem like weird offsets. But you can do it. You’ll need someone to make the tiles for you, if you pick some weird pattern. I hope I live long enough to see it become part of the dubious kitchen package on junk home-renovation shows.
Let me broaden the discussion here. What do I mean by a tiling if I’m allowing any four-sided figure to be a tile? We start with a surface. Usually the plane, a flat surface stretching out infinitely far in two dimensions. The kitchen floor, or any other mere mortal surface, approximates this. But the floor stops at some point. That’s all right. The ideas we develop for the plane work all right for the kitchen. There’s some weird effects for the tiles that get too near the edges of the room. We don’t need to worry about them here. The tiles are some collection of open sets. No two tiles overlap. The tiles, plus their boundaries, cover the whole plane. That is, every point on the plane is either inside exactly one of the open sets, or it’s on the boundary between one (or more) sets.
There isn’t a requirement that all these sets have the same shape. We usually do, and will limit our tiles to one or two shapes endlessly repeated. It seems to appeal to our aesthetics and our installation budget. Using a single pattern allows us to cover the plane with triangles. Any triangle will do. Similarly any quadrilateral will do. For convex pentagonal tiles — here things get weird. There are fourteen known families of pentagons that tile the plane. Each member of the family looks about the same, but there’s some room for variation in the sides. Plus there’s one more special case that can tile the plane, but only that one shape, with no variation allowed. We don’t know if there’s a sixteenth pattern. But then until 2015 we didn’t know there was a 15th, and that was the first pattern found in thirty years. Might be an opening for someone with a good eye for doodling.
There are also exciting opportunities in convex hexagons. Anyone who plays strategy games knows a regular hexagon will tile the plane. (Regular hexagonal tilings fit a certain kind of strategy game well. Particularly they imply an equal distance between the centers of any adjacent tiles. Square and triangular tiles don’t guarantee that. This can imply better balance for territory-based games.) Irregular hexagons will, too. There are three known families of irregular hexagons that tile the plane. You can treat the regular hexagon as a special case of any of these three families. Here, unlike with the pentagons, the question is settled: Karl Reinhardt showed back in 1918 that these three families are all there are. The pentagons remain the place to ready your notepad at the next overlong, agenda-less meeting.
There aren’t tilings of identical convex heptagons, figures with seven sides. Nor of eight sides, nor nine, nor any higher number. You can cover the plane if you allow non-convex figures. See any Tetris game where you keep getting the ‘s’ or ‘t’ shapes. And you can cover it if you use several different shapes.
There’s some guidance if you want to create your own periodic tilings. I see it called the Conway Criterion. I don’t know the field well enough to say whether that is a common term. It could be something one mathematics popularizer thought of and that other popularizers imitated. (I don’t find “Conway Criterion” on the Mathworld glossary, but that isn’t definitive.) Suppose your polygon satisfies a couple of rules about the shapes of the edges. The rules are given in that link earlier this paragraph. If your shape does, then it’ll be able to tile the plane. If you don’t satisfy the rules, don’t despair! It might yet. The Conway Criterion tells you when some shape will tile the plane. It won’t tell you that something won’t.
(The name “Conway” may nag at you as familiar from somewhere. This criterion is named for John H Conway, who’s famous for a bunch of work in knot theory, group theory, and coding theory. And in popular mathematics for the “Game of Life”. This is a set of rules on a grid of numbers. The rules say how to calculate a new grid, based on this first one. Iterating them, creating grid after grid, can make patterns that seem far too complicated to be implicit in the simple rules. Conway also developed an algorithm to calculate the day of the week, in the Gregorian calendar. It is difficult to explain to the non-calendar fan how great this sort of thing is.)
All this has been about periodic tilings. That is, these patterns might be complicated. But if need be, we could get them printed on a nice square tile and cover the floor with that. Almost as beautiful and much easier to install. Are there tilings that aren’t periodic? Aperiodic tilings?
Well, sure. Easily. Take a bunch of tiles with a right angle, and two 45-degree angles. Put any two together and you have a square. So you’re “really” tiling squares that happen to be made up of a pair of triangles. Each pair, toss a coin to decide whether you put the diagonal as a forward or backward slash. Done. That’s not a periodic tiling. Not unless you had a weird run of luck on your coin tosses.
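If you’d like to play with that coin-toss construction, here’s a quick Python sketch of it. The grid size and seed are arbitrary choices of mine, and checking shifts with wraparound is just a crude stand-in for checking translational symmetry:

```python
import random

def random_diagonal_grid(n, seed=None):
    """An n-by-n grid of squares, each cut by a coin-toss diagonal."""
    rng = random.Random(seed)
    return [[rng.choice("/\\") for _ in range(n)] for _ in range(n)]

def maps_onto_itself(grid, dx, dy):
    """Does shifting the pattern by (dx, dy), wrapping around, reproduce it?"""
    n = len(grid)
    return all(grid[r][c] == grid[(r + dy) % n][(c + dx) % n]
               for r in range(n) for c in range(n))

grid = random_diagonal_grid(8, seed=2021)
for row in grid:
    print("".join(row))

# A deliberately periodic pattern has nonzero shifts that reproduce it;
# the coin-toss pattern almost surely has none.
shifts = [(dx, dy) for dx in range(8) for dy in range(8) if (dx, dy) != (0, 0)]
print(any(maps_onto_itself(grid, dx, dy) for dx, dy in shifts))
```

Run it a few times with different seeds; it takes an extraordinary run of luck for the coin tosses to produce a pattern with any translational symmetry at all.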
All right, but is that just a technicality? We could have easily installed this periodically and we just added some chaos to make it “not work”. Can we use a finite number of different kinds of tiles, and have it be aperiodic however much we try to make it periodic? And through about 1966 mathematicians would have mostly guessed that no, you couldn’t. If you had a set of tiles that would cover the plane aperiodically, there was also some way to do it periodically.
And then in 1966 came a surprising result. No, not Penrose tiles. I know you want me there. I’ll get there. Not there yet though. In 1966 Robert Berger — who also attended Rensselaer Polytechnic Institute, thank you — discovered such a tiling. It’s aperiodic, and it can’t be made periodic. Why do we know Penrose Tiles rather than Berger Tiles? Couple reasons, including that Berger had to use 20,426 distinct tile shapes. In 1971 Raphael M Robinson simplified matters a bit and got that down to six shapes. Roger Penrose in 1974 squeezed the set down to two, although by adding some rules about what edges may and may not touch one another. (You can turn this into a pure edges thing by putting notches into the shapes.) That really caught the public imagination. It’s got simplicity and accessibility to combine with beauty. Aperiodic tiles seem to relate to “quasicrystals”, which are what the name suggests and do happen in some materials. And aperiodic tiling embraces our need to have not too much order in our order.
I’ve discussed, in all this, tiling the plane. It’s an easy surface to think about and a popular one. But we can form tiling questions about other shapes. Cylinders, spheres, and toruses seem like they should have good tiling questions available. And we can imagine “tiling” stuff in more dimensions too. If we can fill a volume with cubes, or rectangles, it’s natural to wonder what other shapes we can fill it with. My impression is that fewer definite answers are known about the tiling of three- and four- and higher-dimensional space. Possibly because it’s harder to sketch out ideas and test them. Possibly because the spaces are that much stranger. I would be glad to hear more.
I’m not sure there is a theme to the back half of last week’s mathematically-based comic strips. If there is, it’s about showing some origins of things. I’ll go with that title, then.
Bill Holbrook’s On The Fastrack for the 21st is another in the curious thread of strips about Fi talking about mathematics. She’s presented as doing a good job inspiring kids to appreciate mathematics as a fun, exciting, interesting thing to think about. It’s good work. And I hope this does not sound like I am envious of a more successful, if fictional, mathematics popularizer. But I don’t see much in the strip of her doing this side job well. That is, of making the case that mathematics is worth the time spent on it. That’s a lot to ask given the confines of a syndicated daily newspaper comic strip, yes. What we can expect is some hint of what the actual good argument would look like. But this particular day’s strip rings false to me, for example. I don’t see how “here’s some pizza — but first, here’s a pop quiz” makes mathematics look like something other than a chore.
Pizza area offers many ways into mathematical ideas. How the area depends on the size of the pizza, for example. How the area depends on the shape, even independently of the size. How to slice a pizza fairly, especially if it’s not to be split among four or six or eight people. What is the strangest shape you could make that would give people equal areas? Just the way slices intersect at angles inspires neat little geometry problems. How you might arrange toppings opens up symmetries and tilings, which are surprisingly big areas of mathematics. Setting problems on a pizza gives them a tangibility that could help capture young minds, surely. But I can’t make myself believe that this is a conversation to have when the pizza is entering the room.
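One of those pizza-area problems makes a nice two-line computation. A sketch in Python, using the classic comparison of one 18-inch pizza against two 12-inch pizzas (my example sizes, not anything from the strip):

```python
import math

def pizza_area(diameter):
    """Area of a round pizza, computed from its diameter."""
    return math.pi * (diameter / 2) ** 2

# The classic surprise: one 18-inch pizza has more area
# than two 12-inch pizzas put together.
print(pizza_area(18))      # about 254 square inches
print(2 * pizza_area(12))  # about 226 square inches
```

The surprise, of course, is the same area-scales-as-the-square effect that makes the problem worth setting in the first place.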
Mike Peters’s Mother Goose and Grimm for the 22nd is a lottery joke. So if we suppose this was written about the last time the Powerball jackpot reached a half-billion dollars we can work out how far ahead of publication Mike Peters is working. One solid argument against ever buying a lottery ticket is, as Grimm notes, that you have zero chance of winning. (I’m open to an argument based on expectation value. And even more, I don’t object to people spending a reasonable bit of disposable income “foolishly”.) Mother Goose argues that her chances are vastly worse if she doesn’t buy a ticket. This is true. Are her chances “astronomically” worse? … That depends. A one in three hundred million chance (to use, roughly, the Powerball odds) is so small that it won’t happen to you. Is that any different than a zero in three hundred million chance [*]? Or than a six in three hundred million chance? In any case it won’t happen to you.
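For the expectation-value argument, here’s a back-of-envelope sketch in Python. The 1-in-292,201,338 figure is the advertised Powerball jackpot odds; the jackpot size and ticket price are round numbers I made up for illustration, and I’m ignoring lesser prizes, taxes, and jackpot-splitting:

```python
# Back-of-envelope expected value of a lottery ticket, jackpot prize only.
p_jackpot = 1 / 292_201_338   # advertised Powerball jackpot odds
jackpot = 500_000_000         # a half-billion-dollar jackpot (illustrative)
ticket = 2.00                 # illustrative ticket price

expected_winnings = p_jackpot * jackpot
print(f"Expected jackpot winnings per ticket: ${expected_winnings:.2f}")
print(f"Expected net per ticket: ${expected_winnings - ticket:.2f}")
```

Under these made-up numbers the expected winnings run about $1.71 per $2.00 ticket, a net loss — which is why the expectation-value argument is usually an argument against playing, until the jackpot gets truly absurd.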
[*] Do you actually have zero chance of winning if you don’t have a ticket? I say no, you don’t. Someone might give you a winning ticket. Maybe you find one as a bookmark in a library book. Maybe you find it on the street and figure, what the heck, I’ll check. Unlikely? Sure. But impossible? Hardly.
Johnny Hart’s Back to BC for the 22nd has the form of the world’s oldest story problem. It could also be a joke about the discovery of the concept of zero and the struggle to understand it as a number. Given that clams are used as currency in the BC setting it also shows how finance has driven mathematical development. So the strip actually packs a fair bit of stuff into two panels. … And I’ll admit I’m not quite sure the joke parses, but if you read it quickly it looks like a good enough joke.
Johnny Hart’s Back to BC for the 24th is a more obvious joke. And it’s built on the learning abilities of animals, and the number sense of animals. A large animal stomping a foot evokes, to me at least, Clever Hans. This is a horse presented in the early 20th century as being able to actually do arithmetic. The horse would be given a question and would stomp his hoof enough times to get to the right answer. However good the horse’s number sense might be, he had quite good behavioral sense. It turned out — after brilliant and pioneering work in animal cognition — that Hans was observing his trainer’s body language. When Wilhelm von Osten was satisfied that there’d been the right number of stomps, the horse stopped. This is sometimes presented as Hans ‘merely’ taking subconscious cues from his trainer. But consider how carefully the horse must be observing an animal with a very different body, and how it must have understood cues of satisfaction. I can’t call that ‘mere’. And the work of tracking down a signal that von Osten himself did not know he was sending (and, apparently, never accepted that he did) is also amazing. It serves as a reminder how hard biologists and zoologists have to work.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 24th gives a bit of Dad History about perspective. And, particularly, why artists didn’t seem to use it much before the 16th century. It gets more blatantly tied to mathematics by pointing out how it took ten thousand years of civilization to get Cartesian coordinates. We can argue about how many years civilization has been around. But it does seem strange that we went along for certainly the majority of that time without Cartesian coordinates. They seem so obvious it’s almost hard to not think of them. Many good ideas have such a legacy.
It’s easy to say why older pictures didn’t use perspective, though. For the most part, artists didn’t think perspective gave them something they wanted to show. Ancient peoples knew of perspective. It’s not as if ancient peoples were any dumber than we are, or any less able to look at square tiles held at different angles and at different distances. But we can convey information about the importance of things, or the flow of action of things, using position and relative size. That can be more important than showing that yes, an artist is aware that a square building far away looks small.
I’m less sure what I know about the history of coordinate systems, though, and particularly why it took until René Descartes to describe them. We have a legend of Descartes lying in bed, watching a fly on the tiled ceiling, and realizing he could describe where the fly was by what row and column of tile it was on. (In the past I have written this as though it happened. In writing this essay I went looking for a primary source and found that nobody seems to have one. I shall try not to pass it on again without being very clear that it is just a legend.) But there have been tiled floors and walls and ceilings for a very long time. There have been flies even longer. Why didn’t anyone notice this?
One answer may be that they did. We just haven’t heard about it, because it was found by someone who didn’t catch the interest of a mathematical community. There’s likely a lot of such lost mathematics out there. But still, why not? Wouldn’t anyone with a mathematical inclination see that this is plainly a great discovery? And maybe not. What made Cartesian coordinates great was the realization that arithmetic and geometry, previously seen as separate liberal arts, were duals. A problem in one had an expression as a problem in the other. If you don’t make that connection, then Cartesian coordinates don’t solve any problems you have. They’re just a new way to index things you didn’t need indexed. So that would slow their adoption.
Today’s topic is the lone (so far) request by bunnydoe, so I’m under pressure to make it decent. If she or anyone else would like to nominate subjects for the letters U through Z, please drop me a note at this post. I keep fooling myself into thinking I’ll get one done in under 1200 words.
This is a story which makes a capitalist look kind of good. I make no vouches for its truth, or even, at this remove, where I got it. The story as I heard it was about Ray Kroc, who made McDonald’s into a thing people of every land can complain about. The story has him demonstrate skepticism about the use of business consultants. A consultant might find, for example, that each sesame-seed hamburger bun has (say) 43 seeds. And that if they just cut it down to 41 seeds then each franchise would save (say) $50,000 annually. And no customer would notice the difference. Fine; trim the seeds a little. The next round of consultants would point out that cutting from 41 seeds to 38 would save a further $65,000 per store per year. And again no customer would notice the difference. Cut to 36 seeds? No customer would notice. This process would end when each bun had three sesame seeds, and the customers notice.
I mention this not for my love of sesame-seed buns. It’s just a less-common version of the Sorites Paradox. It’s a very old logical problem. We draw it, and its name, from the Ancient Greek philosophers. In the oldest form, it’s about a heap of sand, and which grain of sand’s removal destroys the heap. This form we attribute to Eubulides of Miletus. Eubulides is credited with a fair number of logical paradoxes. One of them we all know, the Liar Paradox, “What I am saying now is a lie”. Another, the Horns Paradox, I hadn’t encountered before researching this essay. But it bids fair to bring me some delight every day of the rest of my life. “What you have not lost, you have. But you have not lost horns. Therefore you have horns.” Eubulides has a bunch of other paradoxes. Some read, to my uninformed eye, like restatements of other paradoxes. Some look ready to be recast as arguments about Lois Lane’s relationship with Superman. Miletus we know because for a good stretch there every interesting philosopher was hanging around Miletus.
Part of the paradox’s intractability must be that it’s so nearly induction. Induction is a fantastic tool for mathematical problems. We couldn’t do without it. But consider the argument. If a bun is unsatisfying, one more seed won’t make it satisfying. A bun with one seed is unsatisfying. Therefore all buns have an unsatisfying number of sesame seeds on them. It suggests there must be some point at which “adding one more seed won’t help” stops being true. Fine; where is that point, and why isn’t it one fewer or one more seed?
A certain kind of nerd has a snappy answer for the Sorites Paradox. Test a broad population on a variety of sesame-seed buns. There’ll be some so sparse that nearly everyone will say they’re unsatisfying. There’ll be some so abundant most everyone agrees they’re great. So there’s the buns most everyone says are fine. There’s the buns most everyone says are not. The dividing line is at any point between the sparsest that satisfy most people and the most abundant that don’t. The nerds then declare the problem solved and go off. Let them go. We were lucky to get as much of their time as we did. They’re quite busy solving what “really” happened for Rashomon. The approach of “set a line somewhere” is fine if all we want is guidance on where to draw a line. It doesn’t help say why we can anoint some border over any other. At least when we use a river as border between states we can agree going into the water disrupts what we were doing with the land. And even then we have to ask what happens during droughts and floods, and if the river is an estuary, how tides affect matters.
We might see an answer by thinking more seriously about these sesame-seed buns. We force a problem by declaring that every bun is either satisfying or it is not. We can imagine buns with enough seeds that we don’t feel cheated by them, but that we also don’t feel satisfied by. This reflects one of the common assumptions of logic. Mathematicians know it as the Law of the Excluded Middle. A thing is true or it is not true. There is no middle case. This is fine for logic. But for everyday words?
It doesn’t work when considering sesame-seed buns. I can imagine a bun that is not satisfying, but also is not unsatisfying. Surely we can make some logical provision for the concept of “meh”. Now we need not draw some arbitrary line between “satisfying” and “unsatisfying”. We must draw two lines, one of them between “unsatisfying” and “meh”. There is a potential here for regression. Also for the thought of a bun that’s “satisfying-meh-satisfying by unsatisfying”. I shall step away from this concept.
But there are more subtle ways to not exclude the middle. For example, we might decide a statement’s truth exists on a spectrum. We can match how true a statement is to a number. Suppose an obvious falsehood is zero, an unimpeachable truth is one, and normal mortal statements fall somewhere in the middle. “This bun with a single sesame seed is satisfying” might have a truth of 0.01. This perhaps reflects the tastes of people who say they want sesame seeds but don’t actually care. “This bun with fifteen sesame seeds is satisfying” might have a truth of 0.25, say. “This bun with forty sesame seeds is satisfying” might have a truth of 0.97. (It’s true for everyone except those who remember the flush times of the 43-seed bun.) This seems to capture the idea that nothing is always wholly anything. But we can still step into absurdity. Suppose “this bun with 23 sesame seeds is satisfying” has a truth of 0.50. Then “this bun with 23 sesame seeds is not satisfying” should also have a truth of 0.50. What do we make of the statement “this bun with 23 sesame seeds is simultaneously satisfying and not satisfying”? Do we make of it something different from “this bun with 23 sesame seeds is simultaneously satisfying and satisfying”?
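This spectrum-of-truth idea can be made concrete with the common min/max (Zadeh) fuzzy-logic connectives — one standard choice among several, and the truth values here are the made-up ones from above:

```python
# Fuzzy connectives in the common Zadeh (min/max) formulation.
def f_not(a):
    return 1.0 - a

def f_and(a, b):
    return min(a, b)

def f_or(a, b):
    return max(a, b)

satisfying = 0.50  # truth of "this bun with 23 sesame seeds is satisfying"

# "Satisfying and not satisfying" comes out 0.5, not 0:
print(f_and(satisfying, f_not(satisfying)))  # 0.5
# ...and is indistinguishable from "satisfying and satisfying":
print(f_and(satisfying, satisfying))         # 0.5
```

That is the absurdity in miniature: with these connectives a statement and its own contradiction can carry exactly the same degree of truth.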
I see you getting tired in the back there. This may seem like word games. And we all know that human words are imprecise concepts. What has this to do with logic, or mathematics, or anything but the philosophy of language? And the first answer is that we understand logic and mathematics through language. When learning mathematics we get presented with definitions that seem absolute and indisputable. We start to see the human influence in mathematics when we ask why 1 is not a prime number. Later we see things like arguments about whether a ring has a multiplicative identity. And then there are more esoteric debates about the bounds of mathematical concepts.
Perhaps we can think of a concept we can’t describe in words. If we don’t express it to other people, the concept dies with us. We need words. No, putting it in symbols does not help. Mathematical symbols may look like slightly alien scrawl. But they are shorthand for words, and can be read as sentences, and there is this fuzziness in all of them.
And we find mathematical properties that share this problem. Consider: what is the color of the chemical element flerovium? Before you say I just made that up, flerovium was first synthesized in 1998, and officially named in 2012. We’d guess that it’s a silvery-white or maybe grey metallic thing. Humanity has only ever observed about ninety atoms of the stuff. It’s, for atoms this big, amazingly stable. We know an isotope of it that has a half-life of two and a half seconds. But it’s hard to believe we’ll ever have enough of the stuff to look at it and say what color it is.
That’s … all right, though? Maybe? Because we know the quantum mechanics that seem to describe how atoms form. And how they should pack together. And how light should be absorbed, and how light should be emitted, and how light should be scattered by it. At least in principle. The exact answers might be beyond us. But we can imagine having a solution, at least in principle. We can imagine the computer that after great diligent work gives us a picture of what a ten-ton lump of flerovium would look like.
So where does its color come from? Or any of the other properties that these atoms have as a group? No one atom has a color. No one atom has a density, either, or a viscosity. No one atom has a temperature, or a surface tension, or a boiling point. In combination, though, they do.
These are known to statistical mechanics, and through that thermodynamics, as intensive properties. If we have a partition function, which describes all the ways a system can be organized, we can extract information about these properties. They turn up as derivatives with respect to the right parameters of the system.
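The derivative relations are standard statistical mechanics, nothing particular to this essay. With partition function Z and β = 1/(k_B T), for instance, the mean energy, pressure, and entropy of a system in the canonical ensemble all fall out of ln Z:

```latex
\langle E \rangle = -\frac{\partial \ln Z}{\partial \beta}, \qquad
P = \frac{1}{\beta}\,\frac{\partial \ln Z}{\partial V}, \qquad
S = k_B \bigl( \ln Z + \beta \langle E \rangle \bigr).
```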
But the same problem exists. Take a homogeneous gas. It has some temperature. Divide it into two equal portions. Both sides have the same temperature. Divide each half into two equal portions again. All four pieces have the same temperature. Divide again, and again, and a few more times. You eventually get containers with so little gas in them they don’t have a temperature. Where did it go? When did it disappear?
The counterpart to an intensive property is an extensive one. This is stuff like the mass or the volume or the energy of a thing. Cut the gas’s container in two, and each has half the volume. Cut it in half again, and each of the four containers has one-quarter the volume. Keep this up and you stay in uncontroversial territory, because I am not discussing Zeno’s Paradoxes here.
And like Zeno’s Paradoxes, the Sorites Paradox can seem at first trivial. We can distinguish a heap from a non-heap; who cares where the dividing line is? Or whether the division is a gradual change? It seems easy. To show why it is easy is hard. Each potential answer is interesting, and plausible, and when you think hard enough of it, not quite satisfying. Good material to think about.
The first half of last week’s comics offered another bunch of chances to think about what mathematics is for. Before I do get into all that, though, may I mention the most recent update of Gregory Taylor’s serial:
Whether you're an American celebrating Thanksgiving, or simply enjoying the end of a week… there's still a chance to vote on the latest serial entry! https://t.co/ZEMWN4ticK
It does conclude with a vote about the next direction to take. So it’s a good chance for people who like to see authors twisting to their audience’s demands.
Mort Walker and Dik Browne’s Hi and Lois for the 23rd of May, 1961 builds off a major use of arithmetic. Budgeting doesn’t get much attention from mathematicians. I suppose it seems to us like all the basic problems are solved: adding? Subtracting? Multiplication? All familiar things. Especially now with decimal currency. There are great unsolved problems in mathematics, but they get into specialized areas of financial mathematics and just don’t matter for ordinary household budgeting.
Hi comes across a bit harsh here. I’m going to suppose he was taken so by surprise by Lois’s problem that he spoke without thinking.
Scott Hilburn’s The Argyle Sweater for the 19th is the anthropomorphic numerals strip for the week. With the title of “improper fractions” it’s wordplay on the common meaning for a mathematical term. Two times over, come to it. That “negative” refers to a class of numbers as well as to disapproval of something is ordinary enough. I’ve mentioned it, I estimate, 840 times this month alone.
Jokes about the technical and common meanings of “improper” are rarer. In a proper fraction, the numerator is a smaller number than the denominator. In an improper fraction, we don’t count on that. I remember a modest bit of time in elementary and middle school working on converting improper fractions into mixed fractions — a whole number plus a proper fraction. And I also don’t remember anyone caring about that after calculus. In most arithmetic work, there’s not much that’s easier about “1 + 1/2” than about “3/2”. The one major convenience “1 + 1/2” has is that it’s easy to tell at a glance how big the number is. It’s not mysterious how big a number 3/2 is, but that’s because of long familiarity. If I asked you whether 54/17 or 46/13 was the larger number, you’d be fairly stumped and maybe cranky. So there’s not much reason to worry about improper fractions while you’re doing work. For the final presentation of an answer, proper or mixed fractions may well be better.
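Python’s standard fractions module makes the comparison concrete. A sketch, with a small helper (my own, purely illustrative) to split an improper fraction into its mixed form:

```python
from fractions import Fraction

def to_mixed(f):
    """Split an improper fraction into a whole part plus a proper remainder."""
    whole, rem = divmod(f.numerator, f.denominator)
    return whole, Fraction(rem, f.denominator)

a = Fraction(54, 17)
b = Fraction(46, 13)
print(to_mixed(a))  # (3, Fraction(3, 17))
print(to_mixed(b))  # (3, Fraction(7, 13))
print(b > a)        # True: 7/13 beats 3/17, so 46/13 is the larger
```

The mixed forms show the point immediately: both are “three and a bit”, and comparing 7/13 to 3/17 is far less cranky-making than comparing the improper originals.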
Whoever colored that minus symbol before the 5 screwed up and confused the joke. Syndicated cartoonists give precise coloring instructions for Sunday strips. But many of them don’t, or aren’t able to, give coloring instructions for weekday strips like this. And mistakes like that are the unfortunate result.
Pascal Wyse and Joe Berger’s Berger and Wyse for the 19th features a sudoku appearance. It’s labelled a diversion, and so it is, as many mathematics and logic puzzles will be. The lone commenter at GoComics claims to have solved the puzzle, so I will suppose they’re being honest about this.
Brian Fies’s Mom’s Cancer for the 19th I have mentioned before, although not since I started including images for all mentioned comics. It’s set a moment when treatment for Mom’s cancer has been declared a great success.
The trouble is, as Fies lays out, volume is three-dimensional. We are pretty good at measuring the length, or at least the greatest width of something. You might call that the “characteristic length”. A linear dimension. But volume scales as the cube of this characteristic length. And the sad thing is that 0.8 times 0.8 times 0.8 is, roughly, 0.5. This means that the characteristic length dropping by 20% drops the volume by 50%. Or, as Fies is disappointed to see in this strip and its successor, the great news of a 50% reduction in the tumor’s mass is that it’s just 20% less big in every direction. It doesn’t look like enough.
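The arithmetic is short enough to check directly. A sketch in Python:

```python
# Shrink every linear dimension by 20% and the volume falls by about half:
linear_factor = 0.8
print(linear_factor ** 3)        # about 0.512

# Going the other way: halving the volume only shrinks the
# characteristic length by about 21%.
volume_factor = 0.5
print(volume_factor ** (1 / 3))  # about 0.794
```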
Bill Holbrook’s On The Fastrack for the 20th presents one of Fi’s seminars about why mathematics is a good thing. The offscreen student’s question about why one should learn mathematics goes unanswered. As often happens the question is presented as though it’s too absurd to deserve answering. The questioner is conflating “mathematics” with “calculating arithmetic”, yes. And a computer will be better at these calculations. A related question, sometimes asked (and rarely on-topic for my essays here), is why one needs to learn any specific facts when a computer is so much better at finding them.
Knowing facts is not understanding them, no. But it is hard to understand a thing without knowing facts. More, without loving the knowing of facts. If we don’t need to be good at calculating, we do still need to know what to have calculated. And why to calculate that instead of something else. In calculating we can learn things of great beauty. And some of us do go on to mathematics which cannot be calculated. There is software that will do very well at computing, say, the indefinite integral of functions. I don’t know of any that will even start on a problem like “find the kernel of this ring homomorphism”. But these are problems we see, and think interesting, because our experience in arithmetic trains us to notice them. Perhaps there is new interesting mathematics that we would notice if we didn’t have preconceptions set by times tables and long division. But it is hard to believe that we can’t find it because we’re not ignorant enough. I wouldn’t risk it.
Today’s topic is an always rich one. It was suggested by aajohannas, who so far as I know hasn’t got an active blog or other project. If I’m mistaken please let me know. I’m glad to mention the creative works of people hanging around my blog.
An old Sydney Harris cartoon I probably won’t be able to find a copy of before this publishes. A couple people gather around an old fanfold-paper printer. On the printout is the sequence “1 … 2 … 3 … 4 … 5 … ” The caption: ‘Bizarre sequence of computer-generated random numbers’.
Randomness feels familiar. It feels knowable. It means surprise, unpredictability. The upending of patterns. The obliteration of structure. I imagine there are sociologists who’d say it’s what defines Modernity. It’s hard to avoid noticing that the first great scientific theories that embrace unpredictability — evolution and thermodynamics — came to public awareness at the same time impressionism came to arts, and the subconscious mind came to psychology. It’s grown since then. Quantum mechanics is built on unpredictable specifics. Chaos theory tells us even if we could predict statistics it would do us no good. Randomness feels familiar, even necessary. Even desirable. A certain type of nerd thinks eagerly of the Singularity, the point past which no social interactions are predictable anymore. We live in randomness.
And yet … it is hard to find randomness. At least to be sure we have found it. We might choose between options we find ambivalent by tossing a coin. This seems random. But anyone who was six years old and trying to cheat a sibling knows ways around that. Drop the coin without spinning it, from a half-inch above the table, and you know the outcome, all the way through to the sibling’s punching you. When we’re older and can be made to be better sports we’re fairer about it. We toss the coin and give it a spin. There’s no way we could predict the outcome. Unless we knew just how strong a toss we gave it, and how fast it spun, and how the mass of the coin was distributed. … Really, if we knew enough, our tossed coin would be as predictable as the coin we dropped as a six-year-old. At least unless we tossed in some chaotic way, where each throw would be deterministic, but we couldn’t usefully make a prediction.
Our instinctive idea of what randomness must be is flawed. That shouldn’t surprise. Our instinctive idea of anything is flawed. But randomness gives us trouble. It’s obvious, for example, that randomly selected things should have no pattern. But then how is that reasonable? If we draw letters from the alphabet at random, we should expect sometimes to get some cute pattern like ‘aaaaa’ or ‘qwertyuiop’ or the works of Shakespeare. Perhaps we mean we shouldn’t get patterns any more often than we would expect. All right; how often is that?
We can make tests. Some of them are obvious. Take something that generates possibly-random results. Look up how probable each of those outcomes is. Then run off a bunch of outcomes. Do we get about as many of each result as we should expect? Probability tells us we should get as close as we like to the expected frequency if we let the random process run long enough. If this doesn’t happen, great! We can conclude we don’t really have something random.
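A minimal version of that frequency test, sketched in Python with simulated die rolls; the sample size, seed, and my deviation measure are all arbitrary illustrative choices, not any standard test suite:

```python
import random
from collections import Counter

def frequency_check(rolls, sides=6):
    """Worst relative deviation of observed counts from the uniform expectation."""
    counts = Counter(rolls)
    expected = len(rolls) / sides
    return max(abs(counts[face] - expected) / expected
               for face in range(1, sides + 1))

rng = random.Random(42)
rolls = [rng.randint(1, 6) for _ in range(60_000)]
print(f"worst relative deviation: {frequency_check(rolls):.3f}")
# For a fair source this shrinks toward zero as the sample grows;
# a large, persistent deviation is evidence against randomness.
```

Feed it a blatantly non-random sequence, like six hundred consecutive 1s, and the deviation is enormous; that is the “great! we can conclude we don’t really have something random” case.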
We can do more tests. Some of them are brilliantly clever. Suppose there’s a way to order the results. Since mathematicians usually want numbers, putting them in order is easy to do. If the results aren’t numbers, there’s usually a way to match them to numbers. You’ll see me slide here into talking about random numbers as though that were the same as random results. But if I can distinguish different outcomes, then I can label them. If I can label them, I can use numbers as labels. If the order of the numbers doesn’t matter — should “red” be a 1 or a 2? Should “green” be a 3 or an 8? — then, fine; any order is good.
There are 120 ways to order five distinct things. So generate lots of sets of, say, five numbers. What order are they in? There’s 120 possibilities. Do each of the possibilities turn up as often as expected? If they don’t, great! We can conclude we don’t really have something random.
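Here’s a sketch of that ordering test in Python, using sets of five random numbers; the sample count and seed are my arbitrary choices:

```python
import random
from collections import Counter

def order_pattern(values):
    """The permutation that sorts the values, as a tuple of indices."""
    return tuple(sorted(range(len(values)), key=values.__getitem__))

rng = random.Random(7)
patterns = Counter(
    order_pattern([rng.random() for _ in range(5)]) for _ in range(120_000)
)

# All 120 orderings should appear, each about 1,000 times.
print(len(patterns))  # 120
print(min(patterns.values()), max(patterns.values()))
```

A generator that somehow favored, say, ascending runs would show up here as a lopsided count even if the plain frequency test passed it.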
I can go on. There are many tests which will let us say something isn’t a truly random sequence. They’ll allow for something like Sydney Harris’s peculiar sequence of random numbers. Mostly by supposing that if we let it run long enough the sequence would stop. But these all rule out random number generators. Do we have any that rule them in? That say yes, this generates randomness?
I don’t know of any. I suspect there can’t be any, on the grounds that a test of a thousand or a thousand million or a thousand million quadrillion numbers can’t assure us the generator won’t break down next time we use it. If we knew the algorithm by which the random numbers were generated — oh, but there we’re foiled before we can start. An algorithm is the instructions of how to do a thing. How can an instruction tell us how to do a thing that can’t be predicted?
Algorithms seem, briefly, to offer a way to tell whether we do have a good random sequence, though. We can describe patterns. A strong pattern is easy to describe, the way a familiar story is easy to reference. A weak pattern, a random one, is hard to describe. It’s like a dream, in which you can just list events. So we can call random something which can’t be described any more efficiently than just giving a list of all the results. But how do we know that can’t be done? 7, 7, 2, 4, 5, 3, 8, 5, 0, 9 looks like a pretty good set of digits, whole numbers from 0 through 9. I’ll bet not more than one in ten of you guesses correctly what the next digit in the sequence is. Unless you’ve noticed that these are the digits in the square root of π, so that the next couple digits have to be 0, 5, 5, and 1.
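If you want to check those digits yourself, Python’s decimal module will do it; math.sqrt alone is risky out at the fourteenth digit. The value of π here is hard-coded to thirty decimal places:

```python
from decimal import Decimal, getcontext

# pi to 30 decimal places, hard-coded; the float math.pi is too
# short to trust the fourteenth digit of the square root.
PI = Decimal("3.141592653589793238462643383280")

getcontext().prec = 25
root = PI.sqrt()
print(root)             # 1.772453850905516027298167
print(str(root)[2:12])  # 7724538509 -- the "random-looking" run above
```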
We know, on theoretical grounds, that we have randomness all around us. Quantum mechanics depends on it. If we need truly random numbers we can set a sensor. It will turn the arrival of cosmic rays, or the decay of radioactive atoms, or the sighing of a material flexing in the heat into numbers. We trust we gather these and process them in a way that doesn’t spoil their unpredictability. To what end?
That is, why do we care about randomness? Especially why should mathematicians care? The image of mathematics is that it is a series of logical deductions. That is, things known to be true because they follow from premises known to be true. Where can randomness fit?
One answer, one close to my heart, is called Monte Carlo methods. These are techniques that find approximate answers to questions. They do well when exact answers are too hard for us to find. They use random numbers to approximate answers and, often, to make approximate answers better. This demands computations. The field didn’t really exist before computers, although there are some neat forebears. I mean the Buffon needle problem, which lets you calculate the digits of π about as slowly as you could hope to do.
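As a taste of the technique, here is the classic toy Monte Carlo calculation: scatter points in the unit square and count how many land inside the quarter circle. (The sketch, its function name, and its sample counts are my own illustration, not from any particular source.)

```python
import math
import random

def estimate_pi(samples, seed=0):
    """Monte Carlo estimate of pi: the fraction of random points in the
    unit square that fall inside the quarter circle, times four."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / samples
```

With 200,000 samples the estimate lands within a couple hundredths of π. The convergence is slow, much as with the Buffon needle, but the method doesn't care how complicated the region is, which is where Monte Carlo methods earn their keep.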
Another, linked to Monte Carlo methods, is stochastic geometry. “Stochastic” is the word mathematicians attach to things when they feel they’ve said “random” too often, or in an undignified manner. Stochastic geometry is what we can know about shapes when there’s randomness about how the shapes are formed. This sounds like it’d be too weak a subject to study. That it’s built on relatively weak assumptions means it describes things in many fields, though. It can be seen in understanding how forests grow. How to find structures inside images. How to place cell phone towers. Why materials should act like they do instead of some other way. Why galaxies cluster.

There’s also a stochastic calculus, a bit of calculus with randomness added. This is useful for understanding systems where some persistent unpredictable behavior is there. It comes, if I understand the histories of this right, from studying the ways molecules will move around in weird zig-zagging twists. They do this even when there is no overall flow, just a fluid at a fixed temperature. It too has surprising applications. Without the assumption that some prices of things are regularly jostled by arbitrary and unpredictable forces, and the treatment of that by stochastic calculus methods, we wouldn’t have nearly the ability to hedge investments against weird chaotic events. This would be a bad thing, I am told by people with more sophisticated investments than I have. I personally own like ten shares of the Tootsie Roll corporation and am working my way to a $2.00 rebate check from Boyer.
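The zig-zagging molecular motion described above, Brownian motion, can be caricatured as a coin-flip walk. A sketch of my own (the step counts, trial counts, and seed are arbitrary choices):

```python
import random

def final_positions(steps, trials, seed=1):
    """Run many +1/-1 random walks and record where each one ends."""
    rng = random.Random(seed)
    finals = []
    for _ in range(trials):
        pos = 0
        for _ in range(steps):
            pos += rng.choice((-1, 1))
        finals.append(pos)
    return finals

finals = final_positions(steps=100, trials=2000)
mean = sum(finals) / len(finals)
variance = sum((f - mean) ** 2 for f in finals) / len(finals)
```

The average final position stays near zero, matching a fluid with no overall flow, while the variance comes out near the number of steps. That spreading-in-proportion-to-time behavior is the Wiener process on which stochastic calculus is built.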
Given that we need randomness, but don’t know how to get it — or at least don’t know how to be sure we have it — what is there to do? We accept our failings and make do with “quasirandom numbers”. We find some process that generates numbers which look about like random numbers should. These have failings. Most important is that they can, in principle, be predicted. They’re random like “the date Easter will fall on” is random. The date Easter will fall is not at all random; it’s defined by a specific and humanly knowable formula. But if the only information you have is that this year, Easter fell on the 1st of April (Gregorian computus), you don’t have much guidance to whether this coming year it’ll be on the 7th, 14th, or 21st of April. Most notably, quasirandom number generators will tend to repeat after enough numbers are drawn. If we know we won’t need enough numbers to see a repetition, though? Another stereotype of the mathematician is that of a person who demands exactness. It is often more true to say she is looking for an answer good enough. We are usually all right with a merely good enough quasirandomness.
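The repeat-after-enough-draws behavior is easy to see in a deliberately tiny generator. Here's a linear congruential generator, the classic textbook design, with toy parameters of my own choosing:

```python
def lcg(modulus=16, a=5, c=1, seed=0):
    """A tiny linear congruential generator: x -> (a*x + c) mod modulus.
    With these parameters it achieves the full period of 16."""
    x = seed
    while True:
        x = (a * x + c) % modulus
        yield x

gen = lcg()
draws = [next(gen) for _ in range(32)]
```

These parameters hit every residue from 0 to 15 exactly once, then the sequence repeats: draws[:16] equals draws[16:32]. Real generators have periods so long you will never see the repeat, which is what "we won't need enough numbers to see a repetition" means in practice.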
Boyer candies — Mallo Cups, most famously, although I more like the peanut butter Smoothies — come with a cardboard card backing. Each card has two play money “coins”, of values from 5 cents to 50 cents. These can be gathered up for a rebate check or for various prizes. Whether your coin is 5 cents, 10, 25, or 50 cents … well, there’s no way to tell, before you open the package. It’s, so far as you can tell, randomness.
After that busy start last Sunday, Comic Strip Master Command left only a few things for the rest of the week. Here’s everything that seemed worthy of some comment to me:
Alex Hallatt’s Arctic Circle for the 12th is an arithmetic cameo. It’s used as the sort of thing that can be tested, with the straightforward joke about animal testing to follow. It’s not a surprise that machines should be able to do arithmetic. We’ve built machines for centuries to do arithmetic. Literally; Gottfried Wilhelm Leibniz designed and built a calculating machine able to add, subtract, multiply, and divide. This accomplishment from one of the founders of integral calculus is a potent reminder of how much we can accomplish if we’re supposed to be writing instead. (That link is to Robert Benchley’s classic essay “How To Get Things Done”. It is well worth reading, both because it is funny and because it’s actually good, useful advice.)
But it’s also true that animals do know arithmetic. At least a bit. Not — so far as we know — to the point they ponder square roots and such. But certainly to count, to understand addition and subtraction roughly, to have some instinct for calculations. Stanislas Dehaene’s The Number Sense: How the Mind Creates Mathematics is a fascinating book about this. I’m only wary about going deeper into the topic since I don’t know a second (and, better, third) pop book touching on how animals understand mathematics. I feel more comfortable with anything if I’ve encountered it from several different authors. Anyway it does imply the possibility of testing a polar bear’s abilities at arithmetic, only in the real world.
Berkeley Breathed’s Bloom County rerun for the 13th has another mathematics cameo. Geometry’s a subject worthy of stoking Binkley’s anxieties, though. It has a lot of definitions that have to be carefully observed. And while geometry reflects the understanding we have of things from moving around in space, it demands a precision that we don’t really have an instinct for. It’s a lot to worry about.
Terry Border’s Bent Objects for the 15th is our Venn Diagram joke for the week. I like this better than I think the joke deserves, probably because it is done in real materials. (Which is the Bent Objects schtick; it’s always photographs of objects arranged to make the joke.)
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 15th is a joke on knowing how far to travel but not what direction. Normal human conversations carry contextually reasonable suppositions. Told something is two miles away, it’s probably along the major road you’re on, or immediately nearby. I’d still ask for clarification if told something was “two miles away”. Two blocks, I’d let slide, on the grounds that it’s no big deal to correct a mistake.
Still, mathematicians carry defaults with them too. They might be open to a weird, general case, certainly. But we have expectations. There’s usually some obvious preferred coordinate system, or directions. If it’s important that we be ready for alternatives we highlight that. We specify the coordinate system we want. Perhaps we specify we’re taking that choice “without loss of generality”, that is, without supposing some other choice would be wrong.
I noticed the mathematician’s customized plate too. “EIPI1” is surely a reference to the expression $e^{i \pi} + 1$. That sum, it turns out, equals zero. It reflects this curious connection between exponentiation, complex-valued numbers, and the trigonometric functions. It’s a weird thing to know is true, and it’s highly regarded in certain nerd circles for that weirdness.
Through the end of December my Fall 2018 Mathematics A To Z continues. I’m still open for topics to discuss from the last half-dozen letters of the alphabet. Even if someone’s already given a word for some letter, suggest something anyway. You might inspire me in good ways.
And now it’s my last request for my Fall 2018 mathematics A-To-Z. There’s only a half-dozen letters left, but not to fear: they include letters with no end of potential topics, like, ‘X’.
If you have any mathematical topics with a name that starts U through Z that you’d like to see me write about, please say so. I’m happy to write what I fully mean to be a tight 500 words about the subject and then find I’ve put up my second 1800-word essay of the week. I usually go by a first-come, first-served basis for each letter. But I will vary that if I realize one of the alternatives is more suggestive of a good essay topic. And I may use a synonym or an alternate phrasing if both topics for a particular letter interest me. This might be the only way to get a good ‘X’ letter.
Also when you do make a request, please feel free to mention your blog, Twitter feed, YouTube channel, Mathstodon account, or any other project of yours that readers might find interesting. I’m happy to throw in a mention as I get to the word of the day.
So! I’m open for nominations. Here are the words I’ve used in past A to Z sequences. I probably don’t want to revisit them. But I will think over, if I get a request, whether I might have new opinions.
I’m planning this week to open up the end of the alphabet — and the year — to topic suggestions. So there’s no need to panic about that.
The Quadratic Equation is the tool humanity used to discover mathematics. Yes, I exaggerate a bit. But it touches a stunning array of important things. It is most noteworthy because of the time I impressed my several-levels-removed boss at the summer job I had while an undergraduate. He had been stumped by a data-optimization problem for weeks. I noticed it was just a quadratic equation, which is easy to solve. He was, it must be said, overly impressed. I would go on to grad school, where I was once stymied for a week because I couldn’t take one particular derivative correctly. So I have sympathy for my remote supervisor.
We normally write the Quadratic Equation in one of two forms:

$ax^2 + bx + c = 0$

$a_2 x^2 + a_1 x + a_0 = 0$
The first form is great when you are first learning about polynomials, and parabolas. And you’re content to stop at something raised to the second power. The second form is great when you are learning advanced stuff about polynomials. Then you start wanting to know things true about polynomials that go up to arbitrarily high powers. And we always want to know about polynomials. The subscripts under the a’s mean we can’t run out of letters to be coefficients. Setting the subscripts and powers to keep increasing lets us write this out neatly.
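That subscript convention, a list of coefficients indexed by power, is exactly how code represents polynomials of arbitrary degree. A sketch of my own, using Horner's rule for the evaluation:

```python
def poly_eval(coeffs, x):
    """Evaluate a polynomial given coefficients [a_n, ..., a_1, a_0],
    highest power first, using Horner's rule:
    a_n*x^n + ... + a_0  ==  ((a_n*x + a_{n-1})*x + ...)*x + a_0."""
    result = 0
    for a in coeffs:
        result = result * x + a
    return result
```

So poly_eval([2, 3, 4], 5) evaluates 2x² + 3x + 4 at x = 5. The same few lines handle a degree-forty polynomial without us ever running out of letters, which is the point of the subscripts.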
We don’t have to use x. We never do. But we mostly use x. Maybe t, if we’re writing an equation that describes something changing with time. Maybe z, if we want to emphasize how complex-valued numbers might enter into things. The name of the independent variable doesn’t matter. But stick to the obvious choices. If you’re going to make the variable ‘f’ you better have a good reason.
The equation is very old. We have ancient Babylonian clay tablets which describe it. Well, not the quadratic equation as we write it. The oldest problems put it as finding numbers that simultaneously solve two equations, one of them a sum and one of them a product. Changing one equation into two is a venerable mathematical process. It often makes problems simpler. We do this all the time in Ordinary Differential Equations. I doubt there is a direct connection between Ordinary Differential Equations and this alternate form of the Quadratic Equation. But it is a reminder that the ways we express mathematical problems are our conventions. We can rewrite problems to make our lives easier, to make answers clearer. We should look for chances to do that.
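The Babylonian form translates directly into modern terms: given the sum and the product of two unknown numbers, the numbers are the roots of a quadratic. A quick sketch of my own (it assumes the answers are real):

```python
import math

def from_sum_and_product(s, p):
    """Find two numbers with sum s and product p.
    They are the roots of t^2 - s*t + p = 0."""
    d = math.sqrt(s * s - 4 * p)   # assumes s^2 >= 4p, i.e. real answers
    return (s + d) / 2, (s - d) / 2
```

For example, from_sum_and_product(7, 12) returns (4.0, 3.0): the pair of numbers adding to 7 and multiplying to 12, just the sort of problem those clay tablets pose.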
It weaves into everything. Some things seem obvious. Suppose the coefficients — a, b, and c; or $a_2$, $a_1$, and $a_0$ if you’d rather — are all real-valued numbers. Then the quadratic equation has to have two solutions. There can be two real-valued solutions. There can be one real-valued solution, counted twice for reasons that make sense but are too much a digression for me to justify here. There can be two complex-valued solutions. We can infer the usefulness of imaginary and complex-valued numbers by finding solutions to the quadratic equation.
(The quadratic equation is a great introduction to complex-valued numbers. It’s not how mathematicians came to them. Complex-valued numbers looked like obvious nonsense. They corresponded to there being no real-valued answers. A formula that gives obvious nonsense when there’s no answer is great. It’s formulas that give subtle nonsense when there’s no answer that are dangerous. But similar-in-design formulas for cubic and quartic polynomials could use complex-valued numbers in intermediate steps. Plunging ahead as though these complex-valued numbers were proper would get to the real-valued answers. This made the argument that complex-valued numbers should be taken seriously.)
We learn useful things right away from trying to solve it. We teach students to “complete the square” as a first approach to solving it. Completing the square is not that useful by itself: a few pages later in the textbook we get to the quadratic formula and that has every quadratic equation solved. Just plug numbers into the formula. But completing the square teaches something more useful than just how to solve an equation. It’s a method in which we solve a problem by saying, you know, this would be easy to solve if only it were different. And then thinking how to change it into a different-looking problem with the same solutions. This is brilliant work. A mathematician is imagined to have all sorts of brilliant ideas on how to solve problems. Closer to the truth is that she’s learned all sorts of brilliant ways to make a problem more like one she already knows how to solve. (This is the nugget of truth which makes one genre of mathematical jokes. These jokes have the punch line, “the mathematician declares, `this is a problem already solved’ and goes back to sleep.”)
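The change-the-problem idea can even be mechanized. A sketch of my own, using exact fractions so nothing is lost to rounding (the function and its names are mine, not standard library):

```python
from fractions import Fraction

def complete_the_square(a, b, c):
    """Rewrite a*x^2 + b*x + c as a*(x - h)^2 + k, returning (h, k)."""
    a, b, c = Fraction(a), Fraction(b), Fraction(c)
    h = -b / (2 * a)
    k = c - b * b / (4 * a)
    return h, k
```

For x² + 6x + 5 it returns h = −3 and k = −4: that is, (x + 3)² − 4, which is zero exactly when (x + 3)² = 4, an equation anyone can solve by taking square roots. Running the same rewrite on the general coefficients is precisely where the quadratic formula comes from.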
Stare at the solutions of the quadratic equation. You will find patterns. Suppose the coefficients are all rational numbers. Then there are some numbers that can be solutions: 0, 1, the square root of 15, -3.5, these can all turn up. There are some numbers that can’t be. π. e. The tangent of 2. It’s not just a division between rational and irrational numbers. There are different kinds of irrational numbers. This — alongside looking at other polynomials — leads us to transcendental numbers.
Keep staring at the two solutions of the quadratic equation. You’ll notice the sum of the solutions is $-\frac{b}{a}$. You’ll notice the product of the two solutions is $\frac{c}{a}$. You’ll glance back at those ancient Babylonian tablets. This seems interesting, but little more than that. It’s a lead, though. Similar formulas exist for the sum of the solutions for a cubic, for a quartic, for other polynomials. Also for the sum of products of pairs of these solutions. Or the sum of products of triplets of these solutions. Or the product of all these solutions. These are known as Vieta’s Formulas, after the 16th-century mathematician François Viète. (This by way of his Latinized, academic-persona name, Franciscus Vieta.) This gives us a way to rewrite the original polynomial as a set of polynomials in several variables. What’s interesting is that these polynomials have symmetries. They all look like, oh, “xy + yz + zx”. No one variable gets used in a way distinguishable from the others.
This leads us to group theory. The coefficients start out in a ring. The quotients from these Vieta’s Formulas give us an “extension” of the ring. An extension is roughly what the common use of the word suggests. It takes the ring and builds from it a bigger thing that satisfies some nice interesting rules. And it leads us to surprises. The ancient Greeks had several challenges to be done with only straightedge and compass. One was to make a cube double the volume of a given cube. It’s impossible to do, with these tools. (Even ignoring the question of what we would draw on.) Another was to trisect any arbitrary angle; it turns out, there are angles for which it’s just impossible. The group theory derived, in part, from this tells us why. One more impossibility: drawing a square that has exactly the same area as a given circle.
But there are possible things still. Step back from the quadratic equation a bit. Make a function, instead, something that matches numbers (real, complex, what have you) to numbers (the same). Its rule: any x in the domain matches to the number $ax^2 + bx + c$ in the range. We can make a picture that represents this. Set Cartesian coordinates — the x and y coordinates that people think of as the default — on a surface. Then highlight all the points with coordinates (x, y) which make true the equation $y = ax^2 + bx + c$. This traces out a particular shape, the parabola.
Draw a line that crosses this parabola twice. There’s now one fully-enclosed piece of the surface. How much area is enclosed there? It’s possible to find a triangle with area three-quarters that of the enclosed part. It’s easy to use straightedge and compass to draw a square the same area as a given triangle. Showing the enclosed area is four-thirds the triangle’s area? That can … kind of … be done by straightedge and compass. It takes infinitely many steps to do this. But if you’re willing to allow a process to go on forever? And you show that the process would reach some fixed, knowable answer? This could be done by the ancient Greeks; indeed, it was. Archimedes used this as an example of the method of exhaustion. It’s one of the ideas that reaches toward integral calculus.
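We can check the exhaustion result numerically. Take the parabola y = x² and the line y = 1, which cross at x = ±1; the inscribed triangle has vertices at the two crossings and at the parabola's bottom. A sketch of my own (the midpoint-rule grid size is an arbitrary choice):

```python
def parabola_segment_ratio(n=100_000):
    """Ratio of (area between y=1 and y=x^2 on [-1,1]) to the area of
    the inscribed triangle with vertices (-1,1), (1,1), (0,0)."""
    width = 2.0 / n
    area = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * width        # midpoint rule
        area += (1.0 - x * x) * width
    triangle = 0.5 * 2.0 * 1.0              # base 2, height 1, area 1
    return area / triangle
```

The ratio comes out to 4/3 to many decimal places, matching the answer the exhaustion argument reaches after its infinitely many steps.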
This has been a lot of exact, “analytic” results. There are neat numerical results too. Vieta’s formulas, for example, give us good ways to find approximate solutions of the quadratic equation. They work well if one solution is much bigger than the other. Numerical methods for finding solutions tend to work better if you can start from a decent estimate of the answer. And you can learn of numerical stability, and the need for it, studying these.
Numerical calculations have a problem. We have a set number of decimal places with which to work. What happens if we need a calculation that takes more decimal places than we’re given to do perfectly? Here’s a toy version: two-thirds is the number 0.6666. Or 0.6667. Already we’re in trouble. What is three times two-thirds? We’re going to get either 1.9998 or 2.0001 and either way something’s wrong. The wrongness looks small. But any formula you want to use has some numbers that will turn these small errors into big ones. So numerical stability is, in fairness, not something unique to the quadratic equation. It is something you learn if you study the numerics of the equation deeply enough.
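The quadratic formula itself is the standard classroom example of this. When b² is much bigger than 4ac, the textbook formula subtracts two nearly equal numbers and the small root dissolves into rounding error; rewriting with Vieta's product of the roots avoids the subtraction. A sketch of my own (the test case is an arbitrary choice):

```python
import math

def quadratic_roots_stable(a, b, c):
    """Both roots of a*x^2 + b*x + c = 0, avoiding the cancellation in
    the textbook formula: compute one root the safe way, then recover
    the other from Vieta's product of roots, x1 * x2 = c / a."""
    d = math.sqrt(b * b - 4 * a * c)
    q = -0.5 * (b + math.copysign(d, b))
    return q / a, c / q

a, b, c = 1.0, 1e8, 1.0                      # roots near -1e8 and -1e-8
naive_small = (-b + math.sqrt(b * b - 4 * a * c)) / (2 * a)
stable_small = quadratic_roots_stable(a, b, c)[1]
```

Here the naive formula gets the small root wrong by a substantial fraction of its value, while the Vieta version is correct to machine precision. Same equation, same arithmetic hardware; the only difference is which way we wrote the formula.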
I’m also delighted to learn, through Wikipedia, that there’s a prosthaphaeretic method for solving the quadratic equation. Prosthaphaeretic methods use trigonometric functions and identities to rewrite problems. You might call it madness to rely on arctangents and half-angle formulas and such instead of, oh, doing a division or taking a square root. This is because you have calculators. But if you don’t? If you have to do all that work by hand? That’s terrible. But if someone has already prepared a table listing the sines and cosines and tangents of a great variety of angles? They did a great many calculations already. You just need to pick out the one that tells you what you hope to know. I’ll spare you the steps of solving the quadratic equation using trig tables. Wikipedia describes it fine enough.
So you see how much mathematics this connects to. It’s a bit of question-begging to call it that important. As I said, we’ve known the quadratic equation for a long time. We’ve thought about it for a long while. It would be surprising if we didn’t find many and deep links to other things. Even if it didn’t have links, we would try to understand new mathematical tools in terms of how they affect familiar old problems like this. But these are some of the things which we’ve found, and which run through much of what we understand mathematics to be.
There were just enough mathematically-themed comic strips last week to make two editions for this coming week. All going well I’ll run the other half on either Wednesday or Thursday. There is a point that isn’t quite well, which is that one of the comics is in dubious taste. I’ll put that at the end, behind a more specific content warning. In the meanwhile, you can find this and hundreds of other Reading the Comics posts at this link.
Thaves’s Frank and Ernest for the 11th is wordplay, built on the conflation of “negative” as in numbers and “negative” as in bad. I’m not sure the two meanings are unrelated. The word ‘negative’ itself derives from the Latin word meaning to deny, which sounds bad. It’s easy to see why the term would attach to what we call negative numbers. A number plus its negation leaves us zero, a nothing. But it does make the negative numbers sound like bad things to have around, or to have to deal with. The convention that a negative number is less than zero implies that the default choice for a number is one greater than zero. And the default choice is usually seen as the good one, with everything else a falling-away. Still, -7 is as legitimate a number as 7 is; it’s we who think one is better than another.
J C Duffy’s Lug Nuts for the 11th has the Dadaist panel present prime numbers as a way to communicate. I suspect Duffy’s drawing from speculations about how to contact alien intelligences. One problem with communicating with the truly alien is how to recognize there is a message being sent. A message too regular will look like a natural process, one conveying no more intelligence than the brightness which comes to most places at dawn and darkness coming at sunset. A message too information-packed, peculiarly, looks like random noise. We need an intermediate level. A signal that is easy to receive, and that is too hard to produce by natural processes.
Prime numbers seem like a good compromise. An intelligence that understands arithmetic will surely notice prime numbers, or at least work out quickly what’s novel about this set of numbers once given them. And it’s hard to imagine an intelligence capable of sending or receiving interplanetary signals that doesn’t understand arithmetic. (Admitting that yes, we might be ruling out conversational partners by doing this.) We can imagine a natural process that sends out (say) three pulses and then rests, or five pulses and rests. Or even draws out longer cycles: two pulses and a rest, three pulses and a rest, five pulses and a rest, and then a big rest before restarting the cycle. But the longer the string of prime numbers, the harder it is to figure a natural process that happens to hit them and not other numbers.
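The beacon idea is easy to sketch: emit pulse groups counting out the primes, a sequence that a few lines of trial division reproduce. This is my own illustration, not any actual SETI protocol:

```python
def first_primes(n):
    """The first n prime numbers, found by trial division against
    the primes collected so far."""
    primes = []
    candidate = 2
    while len(primes) < n:
        if all(candidate % p != 0 for p in primes):
            primes.append(candidate)
        candidate += 1
    return primes
```

first_primes(6) gives [2, 3, 5, 7, 11, 13]: pulse counts that any arithmetic-minded listener recognizes at once, but that no simple oscillating natural process seems likely to tick out.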
We think, anyway. Until we contact aliens we won’t really know what it’s likely alien contact would be like. Prime numbers seem good to us, but — even if we stick to numbers — there’s no reason triangular numbers, square numbers, or perfect numbers might not be as good. (Well, maybe not perfect numbers; there aren’t many of them, and they grow very large very fast.) But we have to look for something particular, and this seems like a plausible particularity.
Charles Schulz’s Peanuts Begins for the 11th is an early strip, from the days when Lucy would look to Charlie Brown for information. And it’s a joke built on conflating ‘zero’ with ‘nothing’. Lucy’s right that zero times zero has to be something. That’s how multiplication works. That the number zero is something? That’s a tricky concept. I think being mathematically adept can blind one to how weird that is. If you think of zero as the amount you have of a thing when you have nothing of that thing, you start to see what’s weird about it.
But I’m not sure the strip quite sets that up well. I think if Charlie Brown had answered that zero times zero was “nothing” it would have been right (or right enough) and Lucy’s exasperation would have flowed more naturally. As it is? She must know that zero is “nothing”; but then why would she figure “nothing times nothing” has to be something? Maybe not; it would have left Charlie Brown less reason to feel exasperated or for the reader to feel on Charlie Brown’s side. Young Lucy’s leap to “three” needs to be at least a bit illogical to make any sense.
Now to the last strip and the one I wanted to warn about. It alludes to gun violence and school shootings. If you don’t want to deal with that, you’re right. There’s other comic strips to read out there. And this for a comic that ran on the centennial of Armistice Day, which has to just be an oversight in scheduling the (non-plot-dependent) comic.