Reading the Comics, June 26, 2022: First Doldrums of Summer Edition


I have not kept secret that I’ve had little energy lately. I hope that’s changing but can do little more than hope. I find it strange that my lack of energy seems to be matched by Comic Strip Master Command. Last week saw pretty slim pickings for mathematically-themed comics. Here’s what seems worth the sharing from my reading.

Lincoln Peirce’s Big Nate for the 22nd is a Pi Day joke, displaced to the prank day at the end of Nate’s school year. It’s also got a surprising number of people in the comments complaining that 3.1416 is only an approximation to π. It is, certainly, but so is any representation besides π or a similar mathematical expression. And introducing it with 3.1416 gives the reader the hint that this is about a mathematics expression and not an arbitrary symbol. It’s important to the joke that this be communicated clearly, and it’s hard to think of better ways to do that.

Teacher: 'What's this? One of you left a card on my desk?' He picks it up. 'Hm. All it says is 3.1416! That's pi!' A clown leaps on-panel and shoves a pie into the teacher's face: 'Did somebody say PIE?' The class breaks up in laughter. Francis leans over, 'How'd you find the clown?' Big Nate says, 'I held auditions.'
Lincoln Peirce’s Big Nate for the 22nd of June, 2022. This and other essays mentioning something from Big Nate are at this link.

Dave Whamond’s Reality Check for the 24th is another in the line of “why teach algebra instead of something useful” strips. There are several responses. One is that certainly one should learn how to do a household budget; this was, at least back in the day, called home economics, and it was a pretty clear use of mathematics. Another is that a good education is about becoming literate in all the great thinking of humanity: you should come out knowing at least something coherent about mathematics and literature and exercise and biology and music and visual arts and more. Schools often fail to do all of this — how could they not? — but that’s no reason to fault them on the parts of the education that they do provide. And another is that algebra is about getting comfortable working with numbers before you know just what they are. That is, how to work out ways to describe a thing you want to know, and then to find what number (or range of numbers) that is. Still, these responses hardly matter. Mathematics has always lived in a twin space, of being both very practical and very abstract. People have always complained, and will always complain, that students don’t learn how to do the practical well enough. There’s not much changing that.

Teacher pointing to the quadratic formula on the whiteboard: 'Algebra!' Student: 'Why don't you teach me something more useful, that I will actually use in life? Like, oh, I don't know, how to do a household budget?' Silent panel. The teacher points to the quadratic formula again; 'Algebra!'
Dave Whamond’s Reality Check for the 24th of June, 2022. This and the many other comic strips mentioning Reality Check are available at this link.

Charles Schulz’s Peanuts Begins for the 26th sees Violet challenge Charlie Brown to say what a non-perfect circle would be. I suppose this makes the comic more suitable for a philosophy of language blog, but I don’t know any. To be a circle requires meeting a particular definition. None of the things we ever point to and call circles meets that. We don’t generally have trouble connecting our imperfect representations of circles to the “perfect” ideal, though. And Charlie Brown said something meaningful in describing his drawing as being “a perfect circle”. It’s tricky pinning down exactly what it is, though.

Charlie Brown points to a circle he's drawn on a fence: 'How's that? A *perfect* circle!' Violet looks it over: 'Uh huh ... what other kind of circles are there?' Charlie Brown is left silent by this.
Charles Schulz’s Peanuts Begins for the 26th of June, 2022. The strip originally ran the 29th of June, 1954. (The regular Peanuts feed offers comics from the late 1950s through the mid-70s. Peanuts Begins offers comics from the early 1950s.) Essays with some mention of Peanuts or the Peanuts Begins repeats are at this link.

And that is as much as last week moved me to write. This and my other Reading the Comics posts should be at this link. We’ll see whether the upcoming week picks up any.

Reading the Comics, May 7, 2022: Does Comic Strip Master Command Not Do Mathematics Anymore Edition?


I mentioned in my last Reading the Comics post that it seems there are fewer mathematics-themed comic strips than there used to be. I know part of this is that I’m trying to be more stringent. You don’t need me to say every time there’s a Roman numerals joke or that blackboards get mathematics symbols put on them. Still, it does feel like there are fewer candidate strips. Maybe the end of the 2010s was a boom time for comic strips aimed at high school teachers and I only now appreciate that? Only further installments of this feature will let us know.

Jim Benton’s Jim Benton Cartoons for the 18th of April, 2022 suggests an origin for those famous overlapping circle pictures. This did get me curious what’s known about how John Venn came to draw overlapping circles. There’s no reason he couldn’t have used triangles or rectangles or any shape, after all. It looks like the answer is nobody really knows.

Venn, himself, didn’t name the diagrams after himself. Wikipedia credits Charles Dodgson (Lewis Carroll) as describing “Venn’s Method of Diagrams” in 1896. Clarence Irving Lewis, in 1918, seems to be the first person to write “Venn Diagram”. Venn wrote of them as “Eulerian Circles”, referencing the Leonhard Euler who just did everything. Sir William Hamilton — the philosopher, not the quaternions guy — posthumously published the Lectures On Metaphysics and Logic which used circles in these diagrams. Hamilton asserted, correctly, that you could use these to represent logical syllogisms. He wrote that the 1712 logic text Nucleus Logicae Weisianae — predating Euler — used circles, and was right about that. He got the author wrong, crediting Christian Weise instead of the correct author, Johann Christian Lange.

John Venn, as a father, complaining: 'Why can't you brats pick up your HULA HOOPS when you're done playing with ... hang on. Wait a sec ... ' He's looking at three circles of about the same size, overlapping as a three-set Venn diagram. Caption: 'One day at the Venn House.'
Jim Benton’s Jim Benton Cartoons for the 18th of April, 2022. Although I didn’t have a tag for Jim Benton cartoons before I have discussed them a couple times. Future essays mentioning Jim Benton Cartoons should be at this link.

With 1712 the trail seems to end, at least for this layperson doing a short essay’s worth of research. I don’t know what inspired Lange to try circles instead of any other shape. My guess, unburdened by evidence, is that it’s easy to draw circles, especially back in the days when every mathematician had a compass. I assume they weren’t too hard to typeset, at least compared to the many other shapes available. And you don’t even need to think about setting them with a rotation, the way a triangle or a pentagon might demand. But I also would not rule out a notion that circles have some connotation of perfection, in having infinite axes of symmetry and all points on them being equal in distance from the center and such. Might be the reasons fit in the intersection of the ethereal and the mundane.

Title: 'Physics hypotheses that are still on the table.' One is the No-Boundary Proposal, represented with a wireframe geodesic of an open cup. Another is The Weyl Curvature, represented with a wireframe model of a pointed ellipsoid. The punch line is The Victoria Principle, a small pile of beauty-care products.
Daniel Beyer’s Long Story Short for the 29th of April, 2022. This and other essays mentioning Long Story Short should be at this link.

Daniel Beyer’s Long Story Short for the 29th of April, 2022 puts out a couple of concepts from mathematical physics. These are all about geometry, which we now see as key to understanding physics. Particularly cosmology. The no-boundary proposal is a model constructed by James Hartle and Stephen Hawking. It’s about the first 10^{-43} seconds of the universe after the Big Bang. This is an era that was so hot that all our well-tested models of physical law break down. The salient part of the Hartle-Hawking proposal is the idea that in this epoch time becomes indistinguishable from space. If I follow it — do not rely on my understanding for your thesis defense — it’s kind of the way that stepping away from the North Pole first creates the ideas of north and south and east and west. It’s very hard to think of a way to test this which would differentiate it from other hypotheses about the first instances of the universe.

The Weyl Curvature is a less hypothetical construct. It’s a tensor, one of many interesting to physicists. This one represents the tidal forces on a body that’s moving along a geodesic. So, for example, how the moon of a planet gets distorted over its orbit. The Weyl Curvature also offers a way to describe how gravitational waves pass through vacuum. I’m not aware of any serious question of the usefulness or relevance of the thing. But the joke doesn’t work without at least two real physics constructs as setup.

Orange imp, speaking to a blue imp: 'What are you doing?' Blue imp, who's sitting in the air, floating: 'I'm using my powers to make math work.' Orange: 'What?' Blue: 'If I lose my concentration, math stops working.' Blue falls over, crying, 'Oops!' Blue picks self up off the ground and says, 'There! Are all nineteen of you happy now?'
Liniers’ Macanudo for the 5th of May, 2022. Essays about some topic mentioned in Macanudo should be at this link.

Liniers’ Macanudo for the 5th of May, 2022 has one of the imps who inhabit the comic asserting responsibility for making mathematics work. It’s difficult to imagine what a creature could do to make mathematics work, or to not work. If pressed, we would say mathematics is the set of things we’re confident we could prove according to a small, pretty solid-seeming set of logical laws. And a somewhat larger set of axioms and definitions. (Few of these are proved completely, but that’s because it would involve a lot of fiddly boring steps that nobody doubts we could do if we had to. If this sounds sketchy, consider: do you believe my claim that I could alphabetize the books on the shelf to my right, even though I’ve never done that specific task? Why?) It would be like making a word-search puzzle not work.

The punch line, the blue imp counting nineteen of the orange imp, suggests what this might mean. Mathematics, as a set of statements following some rules, is a niche interest. What we like is how so many mathematical things seem to correspond to real-world things. We can imagine mathematics breaking that connection to the real world. The high temperature rising one degree each day this week may tell us something about this weekend, but it’s useless for telling us about November. So I can imagine a magical creature deciding which mathematical models still correspond to the thing they model. Be careful in trying to change their mind.
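
To make that concrete, here is a toy Python sketch, with entirely made-up temperatures, of how a trend that describes one week perfectly fails when pushed toward November:

```python
# Hypothetical daily highs, rising one degree each day for a week.
highs = [70, 71, 72, 73, 74, 75, 76]
slope = 1                                    # degrees per day, read off the data

weekend_forecast = highs[-1] + slope * 2     # two days ahead: 78, plausible
november_forecast = highs[-1] + slope * 150  # five months ahead: 226, absurd
print(weekend_forecast, november_forecast)
```

The model is fine as mathematics; it just stops corresponding to the weather it was meant to model.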


And that’s as many comic strips from the last several weeks that I think merit discussion. All of my Reading the Comics posts should be at this link, though. And I hope to have a new one again sometime soon. I’ll ask my contacts with the cartoonists. I have about half of a contact.

From my Sixth A-to-Z: Zeno’s Paradoxes


I suspect it is impossible to say enough about Zeno’s Paradoxes. To close out my 2019 A-to-Z, though, I tried saying something. There are four particularly famous paradoxes and I discuss what are maybe the second and third-most-popular ones here. (The paradox of the Dichotomy is surely most famous.) The problems presented are about motion and may seem to be about physics, or at least about perception. But calculus is built on differentials, on the idea that we can describe how fast a thing is changing at an instant. Mathematicians have worked out a way to define this that we’re satisfied with and that doesn’t require (obvious) nonsense. But to claim we’ve solved Zeno’s Paradoxes — as unwary STEM majors sometimes do — is unwarranted.

Also I was able to work in a picture from an amusement park trip I took, the closing weekend of Kings Island park in 2019 and the last day that The Vortex roller coaster would run.


Today’s A To Z term was nominated by Dina Yagodich, who runs a YouTube channel with a host of mathematics topics. Zeno’s Paradoxes exist in the intersection of mathematics and philosophy. Mathematics majors like to declare that they’re all easy. The Ancient Greeks didn’t understand infinite series or infinitesimals like we do. Now they’re no challenge at all. This reflects a belief that philosophers must be silly people who haven’t noticed that one can, say, exit a room.

This is your classic STEM-attitude of missing the point. We may suppose that Zeno of Elea occasionally exited rooms himself. That is a supposition, though. Zeno, like most philosophers who lived before Socrates, is known to us mostly from other philosophers making fun of him a century after he died. Or at least trying to explain what they thought he was on about. Modern philosophers are expected to present others’ arguments as well and as strongly as possible. This even — especially — when describing an argument they want to say is the stupidest thing they ever heard. Or, to use the lingo, when they wish to refute it. Ancient philosophers had no such compulsion. They did not mind presenting someone else’s argument sketchily, if they supposed everyone already knew it. Or even badly, if they wanted to make the other philosopher sound ridiculous. Between that and the sparse nature of the record, we have to guess a bit about what Zeno precisely said and what he meant. This is all right. We have some idea of things that might reasonably have bothered Zeno.

And they have bothered philosophers for thousands of years. They are about change. The ones I mean to discuss here are particularly about motion. And there are things we do not understand about change. This essay will not answer what we don’t understand. But it will, I hope, show something about why that’s still an interesting thing to ponder.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it to make the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Zeno’s Paradoxes.

When we capture a moment by photographing it we add lies to what we see. We impose a frame on its contents, discarding what is off-frame. We rip an instant out of its context. And that is before considering how we stage photographs, making people smile and stop tilting their heads. We forgive many of these lies. The things excluded from or the moments around the one photographed might not alter what the photograph represents. Making everyone smile can convey the emotional average of the event in a way that no individual moment represents. Arranging people to stand in frame can convey the participation in the way a candid photograph would not.

But there remains the lie that a photograph is “a moment”. It is no such thing. We notice this when the photograph is blurred. It records all the light passing through the lens while the shutter is open. A photograph records an eighth of a second. A thirtieth of a second. A thousandth of a second. But still, some time. There is always the ghost of motion in a picture. If we do not see it, it is because our photograph’s resolution is too coarse. If we could photograph something with infinite fidelity we would see, even in still life, the wobbling of the molecules that make up a thing.

A photograph of a blurry roller coaster passing through a vertical loop.
One of the many loops of Vortex, a roller coaster at Kings Island amusement park from 1987 to 2019. Taken by me the last day of the ride’s operation; this was one of the roller coaster’s runs after 7 pm, the close of the park the last day of the season.

Which implies something fascinating to me. Think of a reel of film. Here I mean old-school pre-digital film, the thing that’s a great strip of pictures, a new one shown 24 times per second. Each frame of film is a photograph, recording some split-second of time. How much time is actually in a film, then? How long, cumulatively, was a camera shutter open during a two-hour film? I use pre-digital, strip-of-film movies for convenience. Digital films offer the same questions, but with different technical points. And I do not want the writing burden of describing both analog and digital film technologies. So I will stick to the long sequence of analog photographs model.

Let me imagine a movie. One of an ordinary everyday event; an actuality, to use the terminology of 1898. A person overtaking a walking tortoise. Look at the strip of film. There are many frames which show the person behind the tortoise. There are many frames showing the person ahead of the tortoise. When are the person and the tortoise at the same spot?

We have to put in some definitions. Fine; do that. Say we mean when the leading edge of the person’s nose overtakes the leading edge of the tortoise’s, as viewed from our camera. Or, since there must be blur, when the center of the blur of the person’s nose overtakes the center of the blur of the tortoise’s nose.

Do we have the frame when that moment happened? I’m sure we have frames from the moments before, and frames from the moments after. But the exact moment? Are you positive? If we zoomed in, would it actually show the person is a millimeter behind the tortoise? That the person is a hundredth of a millimeter ahead? A thousandth of a hair’s width behind? Suppose that our camera is very good. It can take frames representing as small a time as we need. Does it ever capture that precise moment? To the point that we know, no, it’s not the case that the tortoise is one-trillionth the width of a hydrogen atom ahead of the person?

If we can’t show the frame where this overtaking happened, then how do we know it happened? To put it in terms a STEM major will respect, how can we credit a thing we have not observed with happening? … Yes, we can suppose it happened if we suppose continuity in space and time. Then it follows from the intermediate value theorem. But then we are begging the question. We impose the assumption that there is a moment of overtaking. This does not prove that the moment exists.

Fine, then. What if time is not continuous? If there is a smallest moment of time? … If there is, then, we can imagine a frame of film that photographs only that one moment. So let’s look at its footage.

One thing stands out. There’s finally no blur in the picture. There can’t be; there’s no time during which to move. We might not catch the moment that the person overtakes the tortoise. It could “happen” in-between moments. But at least we have a moment to observe at leisure.

So … what is the difference between a picture of the person overtaking the tortoise, and a picture of the person and the tortoise standing still? A movie of the two walking should be different from a movie of the two pretending to be department store mannequins. What, in this frame, is the difference? If there is no observable difference, how does the universe tell whether, next instant, these two should have moved or not?

A mathematical physicist may toss in an answer. Our photograph is only of positions. We should also track momentum. Momentum carries within it the information of how position changes over time. We can’t photograph momentum, not without getting blurs. But analytically? If we interpret a photograph as “really” tracking the positions of a bunch of particles? To the mathematical physicist, momentum is as good a variable as position, and it’s as measurable. We can imagine a hyperspace photograph that gives us an image of positions and momentums. So, STEM types show up the philosophers finally, right?

Hold on. Let’s allow that somehow we get changes in position from the momentum of something. Hold off worrying about how momentum gets into position. Where does a change in momentum come from? In the mathematical physics problems we can do, the change in momentum has a value that depends on position. In the mathematical physics problems we have to deal with, the change in momentum has a value that depends on position and momentum. But that value? Put it in words. That value is the change in momentum. It has the same relationship to acceleration that momentum has to velocity. For want of a real term, I’ll call it acceleration. We need more variables. An even more hyperspatial film camera.

… And does acceleration change? Where does that change come from? That is going to demand another variable, the change-in-acceleration. (The “jerk”, according to people who want to tell you that “jerk” is a commonly used term for the change-in-acceleration, and no one else.) And the change-in-change-in-acceleration. Change-in-change-in-change-in-acceleration. We have to invoke an infinite regression of new variables. We got here because we wanted to suppose it wasn’t possible to divide a span of time infinitely many times. This seems like a lot to build into the universe to distinguish a person walking past a tortoise from a person standing near a tortoise. And then we still must admit not knowing how one variable propagates into another. That a person is wide is not usually enough explanation of how they are growing taller.

Numerical integration can model this kind of system with time divided into discrete chunks. It teaches us some ways that this can make logical sense. It also shows us that our projections will (generally) be wrong. At least unless we do things like have an infinite number of steps of time factor into each projection of the next timestep. Or use the forecast of future timesteps to correct the current one. Maybe use both. These are … not impossible. But being “ … not impossible” is not to say satisfying. (We allow numerical integration to be wrong by quantifying just how wrong it is. We call this an “error”, and have techniques that we can use to keep the error within some tolerated margin.)
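
As a minimal sketch of what I mean — not any specific production technique — here is Euler's method, the simplest numerical integrator, applied to a problem whose exact answer we know. The error never vanishes, but we can measure it, and taking more (smaller) discrete chunks of time drives it down:

```python
import math

def euler(f, y0, t_end, steps):
    """March y' = f(t, y) from t = 0 to t_end in equal discrete chunks of time."""
    t, y = 0.0, y0
    h = t_end / steps
    for _ in range(steps):
        y += h * f(t, y)   # project the next timestep from the current one only
        t += h
    return y

# Model y' = y with y(0) = 1; the exact value at t = 1 is e.
errors = [abs(math.e - euler(lambda t, y: y, 1.0, 1.0, n)) for n in (10, 100, 1000)]
print(errors)   # each tenfold refinement cuts the error roughly tenfold
```

Every projection is wrong, but wrong by a quantifiable, shrinkable amount — which is exactly the bargain described above.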

So where has the movement happened? The original scene had movement to it. The movie seems to represent that movement. But that movement doesn’t seem to be in any frame of the movie. Where did it come from?

We can have properties that appear in a mass which don’t appear in any component piece. No molecule of a substance has a color, but a big enough mass does. No atom of iron is ferromagnetic, but a chunk might be. No grain of sand is a heap, but enough of them are. The Ancient Greeks knew this; we call it the Sorites paradox, after Eubulides of Miletus. (“Sorites” means “heap”, as in heap of sand. But if you had to bluff through a conversation about ancient Greek philosophers you could probably get away with making up a quote you credit to Sorites.) Could movement be, in the term mathematical physicists use, an intensive property? But intensive properties are obvious to the outside observer of a thing. We are not outside observers to the universe. It’s not clear what it would mean for there to be an outside observer to the universe. Even if there were, what space and time are they observing in? And aren’t their space and their time and their observations vulnerable to the same questions? We’re in danger of insisting on an infinite regression of “universes” just so a person can walk past a tortoise in ours.

We can say where movement comes from when we watch a movie. It is a trick of perception. Our eyes take some time to understand a new image. Our brains insist on forming a continuous whole story even out of disjoint ideas. Our memory fools us into remembering a continuous line of action. That a movie moves is entirely an illusion.

You see the implication here. Surely Zeno was not trying to lead us to understand all motion, in the real world, as an illusion? … Zeno seems to have been trying to support the work of Parmenides of Elea. Parmenides is another pre-Socratic philosopher. So we have about four words that we’re fairly sure he authored, and we’re not positive what order to put them in. Parmenides was arguing about the nature of reality, and what it means for a thing to come into or pass out of existence. He seems to have been arguing something like that there was a true reality that’s necessary and timeless and changeless. And there’s an apparent reality, the thing our senses observe. And in our sensing, we add lies which make things like change seem to happen. (Do not use this to get through your PhD defense in philosophy. I’m not sure I’d use it to get through your Intro to Ancient Greek Philosophy quiz.) That what we perceive as movement is not what is “really” going on is, at least, imaginable. So it is worth asking questions about what we mean for something to move. What difference there is between our intuitive understanding of movement and what logic says should happen.

(I know someone wishes to throw down the word Quantum. Quantum mechanics is a powerful tool for describing how many things behave. It implies limits on what we can simultaneously know about the position and the time of a thing. But there is a difference between “what time is” and “what we can know about a thing’s coordinates in time”. Quantum mechanics speaks more to the latter. There are also people who would like to say Relativity. Relativity, special and general, implies we should look at space and time as a unified set. But this does not change our questions about continuity of time or space, or where to find movement in both.)

And this is why we are likely never to finish pondering Zeno’s Paradoxes. In this essay I’ve only discussed two of them: Achilles and the Tortoise, and The Arrow. There are two other particularly famous ones: the Dichotomy, and the Stadium. The Dichotomy is the one about how to get somewhere, you have to get halfway there. But to get halfway there, you have to get a quarter of the way there. And an eighth of the way there, and so on. The Stadium is the hardest of the four great paradoxes to explain. This is in part because the earliest writings we have about it don’t make clear what Zeno was trying to get at. I can think of something which seems consistent with what’s described, and contrary-to-intuition enough to be interesting. I’m satisfied to ponder that one. But other people may have different ideas of what the paradox should be.
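
The Dichotomy's halves, quarters, and eighths form a geometric series. A few lines of Python, a modern gloss Zeno would certainly not recognize, show the partial sums creeping toward 1 without ever arriving:

```python
# The Dichotomy as a geometric series: each step covers half the remaining distance.
total = 0.0
for n in range(1, 21):
    total += 0.5 ** n      # 1/2, then 1/4, then 1/8, ...
print(total)               # after twenty steps, still 1/2**20 short of 1
```

Modern analysis says the infinite sum is exactly 1; whether that settles what bothered Zeno is the part philosophers still argue about.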

There are a handful of other paradoxes which don’t get so much love, although one of them is another version of the Sorites Paradox. Some of them the Stanford Encyclopedia of Philosophy dubs “paradoxes of plurality”. These ask how many things there could be. It’s hard to judge just what Zeno was getting at with these. We know that one argument had three parts, and only two of them survive. Trying to fill in that gap is a challenge. We want to fill in the argument we would make, projecting from our modern idea of this plurality. It’s not Zeno’s idea, though, and we can’t know how close our projection is.

I don’t have the space to make a thematically coherent essay describing these all, though. The set of paradoxes have demanded thought, even just to come up with a reason to think they don’t demand thought, for thousands of years. We will, perhaps, have to keep trying again to fully understand what it is we don’t understand.


And with that — I find it hard to believe — I am done with the alphabet! All of the Fall 2019 A-to-Z essays should appear at this link. Additionally, the A-to-Z sequences of this and past years should be at this link. Tomorrow and Saturday I hope to bring up some mentions of specific past A-to-Z essays. Next week I hope to share my typical thoughts about what this experience has taught me, and some other writing about this writing.

Thank you, all who’ve been reading, and who’ve offered topics, comments on the material, or questions about things I was hoping readers wouldn’t notice I was shorting. I’ll probably do this again next year, after I’ve had some chance to rest.

History of Philosophy podcast discusses Girolamo Cardano


I’m very slightly sorry to bump other things. But folks who like the history of mathematics, and how it links to other things, and who also like listening to stuff, might want to know. Peter Adamson, host of the History Of Philosophy Without Any Gaps podcast, this week talked for about twenty minutes about Girolamo Cardano.

Cardano is famous in mathematics circles for early work in probability. And, more, for pioneering the use of imaginary numbers. This along the way to a fantastic controversy about credit, and discovery, and secrets, and self-promotion.

Cardano was, as Adamson notes, a polymath; his day job was as a physician and he poked around in the philosophy of mind. That’s what makes him a fit subject for Adamson’s project. So if you’d like a different perspective on a person known, if vaguely, to many mathematics folks, and have a spot of time, you might enjoy it.

My All 2020 Mathematics A to Z: Gottfried Wilhelm Leibniz


Today’s topic was suggested by bunnydoe. I know of a project bunnydoe runs, but not whether it should be publicized. It is another biographical piece. Biographies and complex numbers; that seems to be the theme of this year.

Color cartoon illustration of a coati in a beret and neckerchief, holding up a director's megaphone and looking over the Hollywood hills. The megaphone has the symbols + x (division obelus) and = on it. The Hollywood sign is, instead, the letters MATHEMATICS. In the background are spotlights, with several of them crossing so as to make the letters A and Z; one leg of the spotlights has 'TO' in it, so the art reads out, subtly, 'Mathematics A to Z'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Gottfried Wilhelm Leibniz.

The exact suggestion I got for L was “Leibniz, the inventor of Calculus”. I can’t in good conscience offer that. This isn’t to deny Leibniz’s critical role in calculus. We rely on many of the ideas he’d had for it. We especially use his notation. But there are few great big ideas that can be truly credited to an inventor, or even a team of inventors. Put aside the sorry and embarrassing priority dispute with Isaac Newton. Many mathematicians in the 16th and 17th centuries were working on how to improve the Archimedean “method of exhaustion”. This would find the areas inside select curves: integral calculus. Johannes Kepler worked out the areas of ellipse slices, albeit with considerable luck. Gilles Roberval tried working out the area inside a curve as the area of infinitely many narrow rectangular strips. We still learn integration from this. Pierre de Fermat recognized how tangents to a curve could find maximums and minimums of functions. This is a critical piece of differential calculus. Isaac Barrow, Evangelista Torricelli (of barometer fame), Pietro Mengoli, and Stefano degli Angeli all pushed mathematics towards calculus. James Gregory proved, in geometric form, the relationship between differentiation and integration. That relationship is the Fundamental Theorem of Calculus.

This is not to denigrate Leibniz. We don’t dismiss the Wright Brothers though we know that without them, Alberto Santos-Dumont or Glenn Curtiss or Samuel Langley would have built a workable airplane anyway. We have Leibniz’s note, dated the 29th of October, 1675 (says Florian Cajori), writing out \int l to mean the sum of all l’s. By mid-November he was integrating functions, and writing out his work as \int f(x) dx . Any mathematics or physics or chemistry or engineering major today would recognize that. A year later he was writing things like d(x^n) = n x^{n - 1} dx , which we’d also understand if not quite care to put that way.
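
We can check that last rule numerically, treating dx as a small but finite nudge rather than a true differential. This is my illustration, not anything Leibniz wrote:

```python
# Check Leibniz's rule d(x^n) = n x^(n-1) dx with a small but finite dx.
def difference_quotient(f, x, dx=1e-6):
    """Approximate the derivative of f at x by a finite difference."""
    return (f(x + dx) - f(x)) / dx

n, x = 3, 2.0
approx = difference_quotient(lambda t: t ** n, x)
exact = n * x ** (n - 1)    # the rule predicts 3 * 2^2 = 12
print(approx, exact)        # the two agree to several decimal places
```

Shrink dx further and the agreement improves; Leibniz's differential is, loosely, the idealized limit of this process.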

Though we use his notation and his basic tools we don’t exactly use Leibniz’s particular ideas of what calculus means. It’s been over three centuries since he published. It would be remarkable if he had gotten the concepts exactly and in the best of all possible forms. Much of Leibniz’s calculus builds on the idea of a differential. This is a quantity that’s smaller than any positive number but also larger than zero. How does that make sense? George Berkeley argued it made not a lick of sense. Mathematicians frowned, but conceded Berkeley was right. By the mid-19th century they had a rationale for differentials that avoided this weird sort of number.

It’s hard to avoid the differential’s lure. The intuitive appeal of “imagine moving this thing a tiny bit” is always there. In science or engineering applications it’s almost mandatory. Few things we encounter in the real world have the kinds of discontinuity that create logic problems for differentials. Even in pure mathematics, we will look at a differential equation like \frac{dy}{dx} = x and rewrite it as dy = x dx . Leibniz’s notation gives us the idea that taking derivatives is some kind of fraction. It isn’t, but in many problems we act as though it were. It works out often enough we forget that it might not.
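
Written out, the fraction-like manipulation solves that equation in a few moves, each one strictly an abuse of notation and each one giving the right answer:

\frac{dy}{dx} = x \quad\Rightarrow\quad dy = x\,dx \quad\Rightarrow\quad \int dy = \int x\,dx \quad\Rightarrow\quad y = \frac{x^2}{2} + C

The middle steps treat dy and dx as things that can be pried apart and summed separately. A rigorous treatment reaches the same conclusion without doing that, but the differential shorthand is what everyone actually writes.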

Better came, though. From the 1960s Abraham Robinson and others worked out a different idea of what real numbers are, one in which differentials have a rigorous logical definition. We call the mathematics which uses this “non-standard analysis”. The name tells something of its use. This is not to call it wrong. It’s merely not what we learn first, or necessarily at all. And its differentials are Leibniz’s differentials. 304 years after his death there is still a lot of mathematics he could plausibly recognize.

There is also a lot of still-vital mathematics that he touched directly. Leibniz appears to be the first person to use the term “function”, for example, to describe that thing we’re plotting with a curve. He worked on systems of linear equations, and methods to find solutions if they exist; the technique he used is now called Gaussian elimination. We see the bundling of the equations’ coefficients that he did as building a matrix and finding its determinant. That determinant method we know, today, as Cramer’s Rule, after Gabriel Cramer. The Japanese mathematician Seki Takakazu had discovered determinants before Leibniz, though.
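
Cramer’s Rule, in the two-by-two case, is compact enough to sketch. This is a modern restatement, not anything from Leibniz’s notes, and the coefficients below are arbitrary examples:

```python
def cramer_2x2(a, b, c, d, e, f):
    """Solve the system a*x + b*y = e, c*x + d*y = f by Cramer's Rule:
    each unknown is a ratio of determinants."""
    det = a * d - b * c  # determinant of the coefficient matrix
    if det == 0:
        raise ValueError("no unique solution")
    return (e * d - b * f) / det, (a * f - e * c) / det

# 2x + y = 5 and x - y = 1 have the solution x = 2, y = 1.
x, y = cramer_2x2(2, 1, 1, -1, 5, 1)
```

It is a lovely formula and a terrible algorithm: for large systems the determinants cost far more than Gaussian elimination does, which is why elimination is what software actually runs.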

Leibniz tried to study a thing he called “analysis situs”, which two centuries on would be a name for topology. My reading tells me you can get a good fight going among mathematics historians by asking whether he was a pioneer in topology. So I’ll decline to take a side in that.

In the 1680s he tried to create an algebra of thought, to turn reasoning into something like arithmetic. His goal was good: we see these ideas today as Boolean algebra, and concepts like conjunction and disjunction and negation and the empty set. Anyone studying logic knows these today. He’d also worked in something we can see as symbolic logic. Unfortunately for his reputation, the papers he wrote about that went unpublished until late in the 19th century. By then other mathematicians, like Gottlob Frege and Charles Sanders Peirce, had independently published the same ideas.

We give Leibniz’s name to a particular series that tells us the value of π:

1 - \frac13 + \frac15 - \frac17 + \frac19 - \frac{1}{11} + \cdots = \frac{\pi}{4}

(The Indian mathematician Madhava of Sangamagrama knew the formula this comes from by the 14th century. I don’t know whether Western Europe had gotten the news by the 17th century. I suspect it hadn’t.)

The drawback to using this to figure out digits of π is that it takes forever to use. Taking ten decimal digits of π demands evaluating about five billion terms. That’s not hyperbole; the series really does converge that slowly.
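
A quick sketch of just how slow that convergence is (the term count here is an arbitrary choice):

```python
import math

def leibniz_pi(terms):
    """Partial sum of the Leibniz series 4*(1 - 1/3 + 1/5 - 1/7 + ...)."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(terms))

# The error shrinks only like 1/n: a thousand terms still leaves
# the third decimal place of pi wrong.
error = abs(math.pi - leibniz_pi(1000))
```

Compare that with, say, Machin-type formulas, which get the same thousand terms to yield hundreds of correct digits.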

Which is something of a theme in Leibniz’s biography. He had a great many projects. Some of them even reached a conclusion. Many did not, and instead sprawled out with great ambition and sometimes insight before getting lost. Consider a practical one: he believed that the use of wind-driven propellers and water pumps could drain flooded mines. (Mines are always flooding.) In principle, he was right. But they all failed. Leibniz blamed deliberate obstruction by administrators and technicians. He even blamed workers afraid that new technologies would replace their jobs. Yet even in this failure he made observations and had bracing new thoughts. The geology he learned in the mines project made him hypothesize that the Earth had been molten. I do not know the history of geology well enough to say whether this was significant to that field. It may have been another frustrating moment of insight (lucky or otherwise) ahead of its time but not connected to the mainstream of thought.

Another project, tantalizing yet incomplete: the “stepped reckoner”, a mechanical arithmetic machine. The design was to do addition and subtraction, multiplication and division. It’s a breathtaking idea. It earned him election into the (British) Royal Society in 1673. But it never quite worked, never getting carries to happen fully automatically. He never did finish it, and lost standing with the Royal Society when he moved on to other projects. He had a note describing a machine that could do some algebraic operations. In the 1690s he had some designs for a machine that might, in theory, integrate differential equations. It’s a fantastic idea. At some point he also devised a cipher machine. I do not know whether it was ever used in its time.

His greatest and longest-lasting unfinished project was for his employer, the House of Brunswick. Three successive Brunswick rulers were content to let Leibniz work on his many side projects. The one that Ernest Augustus wanted was a history of the Guelf family, in the House of Brunswick. One that went back to the time of Charlemagne or earlier if possible. The goal was to burnish the reputation of the house, which had just become a hereditary Elector of the Holy Roman Empire. (That is, they had just gotten to a new level of fun political intriguing. But they were at the bottom of that level.) Starting from 1687 Leibniz did good diligent work. He travelled throughout central Europe to find archival materials. He studied their context and meaning and relevance. He organized it. What he did not do, by his death in 1716, was write the thing.

It is always difficult to understand another person. More so someone you know only through biography. And especially someone who lived in very different times. But I do see a particular, and modern, personality type here. We all know someone who will work so very hard getting prepared to do a project Right that it never gets done. You might be reading the words of one right now.

Leibniz was a compulsive Society-organizer. He promoted ones in Brandenburg and Berlin and Dresden and Vienna and Saint Petersburg. None succeeded. It’s not obvious why. Leibniz was well-connected enough; he’s known to have had over six hundred correspondents. Even for a time of great letter-writing, that’s a lot.

But it does seem like something about him offended others. Failing to complete big projects, like the stepped reckoner or the history of the Guelf family, seems like some of that. Anyone who knows of calculus knows of the Newton-versus-Leibniz priority dispute. Grant that Leibniz seems not to have much fueled the quarrel. (And that modern historians agree Leibniz did not steal calculus from Newton.) Just being at the center of Drama causes people to rate you poorly.

It seems like there’s more, though. He was liked, for example, by the Electress Sophia of Hanover and her daughter Sophia Charlotte. These were the mother and the sister of Britain’s King George I. When George I ascended to the British throne he forbade Leibniz from coming to London until at least one volume of the history was written. (The restriction seems fair, considering Leibniz was 27 years into the project by then.)

There are pieces in his biography that suggest a person a bit too clever for his own good. His first salaried position, for example, was as secretary to a Nuremberg alchemical society. He did not know alchemy. He passed himself off as deeply learned, though. I don’t blame him. Nobody would ever pass a job interview if they didn’t pretend to have expertise. Here it seems to have worked.

But consider, for example, his peace mission to Paris. Leibniz was born in the last years of the Thirty Years War. In that, the Great Powers of Europe battled each other in the German states. They destroyed Germany with a thoroughness not matched until World War II. Leibniz reasonably feared France’s King Louis XIV had designs on what was left of Germany. So his idea was to sell the French government on a plan of attacking Egypt and, from there, the Dutch East Indies. This falls short of an early-Enlightenment idea of rational world peace and a congress of nations. But anyone who plays grand strategy games recognizes the “let’s you and him fight” scheming. (The plan became irrelevant when France went to war with the Netherlands. The war did rope Brandenburg-Prussia, Cologne, Münster, and the Holy Roman Empire into the mess.)

God: 'T-Rex remember the other day when you said you wanted to enhance the timeline?' T-Rex: 'Absolutely!' God: 'Well why enhance it only once?' T-Rex: 'Holy cow! Why indeed? I enhance the past so there's holodecks in the present. And then I teach cavepeeps to invent those, and then return to the future and find new entertainment technology so amazing I can't even imagine it right now! I could enhance the timeline over and over until me and all the other time travellers conclude it can't possibly be enhanced any more!!' Utahraptor: 'Which leaves us with two possibilities.' T-Rex: 'Oh?' Utahraptor: 'One: time travel isn't possible and we're stuck with this timeline.' T-Rex: 'Boo! Let's ignore that one!' Utahraptor: 'Two: time travel is possible, and this timeline is absolutely the best one anyone could come up with' T-Rex: 'Boo! That one --- that one gave me the sad feelings.'
Ryan North’s Dinosaur Comics for the 20th of August, 2020. (Spoiler: time travel isn’t possible.) And while I am still just reading the comics for fun, I have a number of essays discussing aspects of Dinosaur Comics at this link.

And I have not discussed Leibniz’s work in philosophy, outside his logic. He’s respected for the theory of monads, part of the long history of trying to explain how things can have qualities. Like many he tried to find a deductive-logic argument about whether God must exist. And he proposed the notion that the world that exists is the most nearly perfect that can possibly be. Everyone has been dragging him for that ever since he said it, and they don’t look ready to stop. It’s an unfair rap, even if it makes for funny spoofs of his writing.

The optimal world may need to be badly defective in some ways. And this recognition inspires a question in me. Obviously Leibniz could come to this realization from thinking carefully about the world. But anyone working on optimization problems knows the more constraints you must satisfy, the less optimal your best-fit can be. Some things you might like may end up being lousy, because the overall maximum is more important. I have not seen anything to suggest Leibniz studied the mathematics of optimization theory. Is it possible he was working in things we now recognize as such, though? That he has notes in the things we would call Lagrange multipliers or such? I don’t know, and would like to know if anyone does.

Leibniz’s funeral was unattended by any dignitary or courtier besides his personal secretary. The Royal Academy and the Berlin Academy of Sciences did not honor their member’s death. His grave was unmarked for a half-century. And yet historians of mathematics, philosophy, physics, engineering, psychology, social science, philology, and more keep finding his work, and finding it more advanced than one would expect. Leibniz’s legacy seems to be one always rising and emerging from shade, but never being quite where it should.


And that’s enough for one day. All of the 2020 A-to-Z essays should be at this link. Both 2020 and all past A-to-Z essays should be at this link. And, as I am hosting the Playful Math Education Blog Carnival at the end of September, I am looking for any blogs, videos, books, anything educational or recreational or just interesting to read about. Thank you for your reading and your help.

My All 2020 Mathematics A to Z: Complex Numbers


Mr Wu, author of the Singapore Maths Tuition blog, suggested complex numbers for a theme. I wrote long ago a bit about what complex numbers are and how to work with them. But that hardly exhausts the subject, and I’m happy revisiting it.

Color cartoon illustration of a coati in a beret and neckerchief, holding up a director's megaphone and looking over the Hollywood hills. The megaphone has the symbols + x (division obelus) and = on it. The Hollywood sign is, instead, the letters MATHEMATICS. In the background are spotlights, with several of them crossing so as to make the letters A and Z; one leg of the spotlights has 'TO' in it, so the art reads out, subtly, 'Mathematics A to Z'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Complex Numbers.

A throwaway joke somewhere in The Hitchhiker’s Guide To The Galaxy has Marvin The Paranoid Android grumble that he’s invented a square root for minus one. Marvin’s gone and rejiggered all of mathematics while waiting for something better to do. Nobody cares. It reminds us that while Douglas Adams established much of a particular generation of nerd humor, he was not himself a nerd. The nerds who read The Hitchhiker’s Guide To The Galaxy obsessively know we already did that, centuries ago. Marvin’s creation was as novel as inventing “one-half”. (It may be that Adams knew, and intended Marvin’s working so hard on the already-known to be the joke.)

Anyone who reads a pop mathematics blog like this likely knows the rough story of complex numbers in Western mathematics. The desire to find roots of polynomials. The discovery of formulas to find roots. Polynomials whose root formulas demanded the square roots of negative numbers. And the discovery that sometimes, if you carried on as if the square root of a negative number made sense, the ugly terms vanished. And you got correct answers in the end. And, eventually, mathematicians relented. These things were unsettling enough to get unflattering names. To call a number “imaginary” may be more pejorative than even “negative”. It hints at the treatment of these numbers as falsework, never to be shown in the end. To call the sum of a “real” number and an “imaginary” one “complex” is to warn: an expert might use these numbers only with care and deliberation. But we can count them as numbers.

I mentioned when writing about quaternions how, when I learned of complex numbers, I wanted to do the same trick again. My suspicion is many mathematicians do. The example of complex numbers teases us with the possibilities of other numbers. If we’ve defined \imath to be “a number that, squared, equals minus one”, what next? Could we define a \sqrt{\imath} ? How about a \log{\imath} ? Maybe something else? An arc-cosine of \imath ?

You can try any of these. They turn out to be redundant. The real numbers and \imath already let you describe any of those new numbers. You might have a flash of imagination: what if there were another number that, squared, equalled minus one, and that wasn’t equal to \imath ? Numbers that look like a + b\imath + c\jmath ? Here, and later on, a and b and c are some real numbers. b\imath means “multiply the real number b by whatever \imath is”, and we trust that this makes sense. There’s a similar setup for c and \jmath . And if you just try that, with a + b\imath + c\jmath , you get some interesting new mathematics. Then you get stuck on what the product of these two different square roots should be.

If you think to multiply, that is. If all you think of is addition and subtraction and maybe multiplication by a real number, a + b\imath + c\jmath works fine. You only spot trouble if you happen to do multiplication. Granted, multiplication is not, to us, an exotic operation. Take it as a warning, though, of how trouble could develop. How do we know, say, that complex numbers are fine as long as you don’t try to take the log of the haversine of one of them, or some other obscurity? And that then they produce gibberish? Or worse, produce that most dread construct, a contradiction?
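
Here is a sketch of where the multiplication trouble turns into an outright contradiction. Suppose the product \imath\jmath must itself be one of our new numbers, say \imath\jmath = x + y\imath + z\jmath for some real numbers x, y, and z. Multiply both sides on the right by \jmath , using the ordinary rules of algebra and \jmath^2 = -1 :

(\imath\jmath)\jmath = \imath\jmath^2 = -\imath \quad\mbox{while}\quad (x + y\imath + z\jmath)\jmath = (xy - z) + y^2\imath + (x + yz)\jmath

Matching the \imath parts forces y^2 = -1 , which no real number satisfies. Three-part numbers of this form cannot carry a consistent multiplication; it takes the four parts of the quaternions to get one.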

Here I am indebted to an essay that ten minutes ago I would have sworn was in one of the two books I still have out from the university library. I’m embarrassed to learn my error. It was about the philosophy of complex numbers and it gave me fresh perspectives. When the university library reopens for lending I will try to track back through my borrowing and find the original. I suspect, without confirming, that it may have been in Reuben Hersh’s What Is Mathematics, Really?.

The insight is that we can think of complex numbers in several ways. One fruitful way is to match complex numbers with points in a two-dimensional space. It’s common enough to pair, for example, the number 3 + 4\imath with the point at Cartesian coordinates (3, 4) . Mathematicians do this so often it can take a moment to remember that is just a convention. And there is a common matching between points in a Cartesian coordinate system and vectors. Chaining together matches like this can worry us. Trust that we believe our matches are sound. Then we notice that adding two complex numbers does the same work as adding ordered coordinate pairs. If we trust that adding coordinate pairs makes sense, then we need to accept that adding complex numbers makes sense. Adding coordinate pairs is the same work as adding real numbers. It’s just a lot of them. So we’re led to trust that if addition for real numbers works then addition for complex numbers does.
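
In a language with complex numbers built in, the match is easy to check; the sample numbers here are arbitrary:

```python
z1, z2 = 3 + 4j, 1 - 2j

# Adding the complex numbers ...
zsum = z1 + z2

# ... does the same work as adding the coordinate pairs componentwise.
p1, p2 = (z1.real, z1.imag), (z2.real, z2.imag)
psum = (p1[0] + p2[0], p1[1] + p2[1])
```

The two answers agree component by component, which is the whole claim: complex addition is real-number addition, done twice at once.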

Multiplication looks like a mess. A different perspective helps us. A different way to look at where points are on the plane is to use polar coordinates. That is, the distance a point is from the origin, and the angle between the positive x-axis and the line segment connecting the origin to the point. In this format, multiplying two complex numbers is easy. Let the first complex number have polar coordinates (r_1, \theta_1) . Let the second have polar coordinates (r_2, \theta_2) . Their product, by the rules of complex numbers, is a point with polar coordinates (r_1\cdot r_2, \theta_1 + \theta_2) . These polar coordinates are real numbers again. If we trust addition and multiplication of real numbers, we can trust this for complex numbers.
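
The polar rule can be checked against ordinary complex arithmetic; again the sample numbers are arbitrary:

```python
import cmath

z1, z2 = 3 + 4j, 1 - 2j

# Rectangular multiplication, as the built-in arithmetic does it.
product = z1 * z2

# Polar multiplication: multiply the moduli, add the angles.
r1, t1 = cmath.polar(z1)
r2, t2 = cmath.polar(z2)
polar_product = cmath.rect(r1 * r2, t1 + t2)
```

The two computations agree to floating-point precision, which is the point: multiplication that looks messy in rectangular form is just one real multiplication and one real addition in polar form.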

If we’re confident in adding complex numbers, and confident in multiplying them, then … we’re in quite good shape. If we can add and multiply, we can do polynomials. And everything is polynomials.

We might feel suspicious yet. Going from complex numbers to points in space is calling on our geometric intuitions. That might be fooling ourselves. Can we find a different rationalization? The same result by several different lines of reasoning makes the result more believable. Is there a rationalization for complex numbers that never touches geometry?

We can. One approach is to use the mathematics of matrices. We can match the complex number a + b\imath to the sum of the matrices

a \left[\begin{array}{c c} 1 & 0 \\ 0 & 1 \end{array}\right] + b \left[\begin{array}{c c} 0 & 1 \\ -1 & 0  \end{array}\right]

Adding matrices is compelling. It’s the same work as adding ordered pairs of numbers. Multiplying matrices is tedious, though it’s not so bad for matrices this small. And it’s all done with real-number multiplication and addition. If we trust that the real numbers work, we can trust complex numbers do. If we can show that our new structure can be understood as a configuration of the old, we convince ourselves the new structure is meaningful.
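
As a quick check of this matching, with the two-by-two multiplication written out in plain real arithmetic (the sample numbers are arbitrary):

```python
def mat(a, b):
    """The 2x2 matrix standing in for the complex number a + b*i."""
    return [[a, b], [-b, a]]

def matmul(m, n):
    """Ordinary 2x2 matrix multiplication, all real arithmetic."""
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# The matrices for 3 + 4i and 1 - 2i multiply to the matrix
# for their complex product, 11 - 2i.
product = matmul(mat(3, 4), mat(1, -2))
```

Squaring the matrix that stands in for \imath gives the negative of the identity matrix, exactly the property we asked of \imath in the first place.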

The process by which we learn to trust them as numbers guides us in learning how to trust any new mathematical structure. So here is a new thing that complex numbers can teach us, years after we have learned how to divide them. Do not attempt to divide complex numbers. That’s too much work.

In Our Time podcast repeats episode on Zeno’s Paradoxes


It seems like barely yesterday I was giving people a tip about this podcast. In Our Time, a BBC panel-discussion programme about topics of general interest, this week repeated an episode about Zeno’s Paradoxes. It originally ran in 2016.

The panel this time is two philosophers and a mathematician, which is probably about the correct blend to get the topic down. The mathematician here is Marcus du Sautoy, with the University of Oxford, who’s a renowned mathematics popularizer in his own right. That said, I think he falls into a trap we STEM types often do in talking about Zeno, that of thinking the problem is merely “how can we talk about an infinity of something”. Or “how can we talk about an infinitesimal of something”. Mathematicians have what seems to be a pretty good hold on how to do these calculations. But that we can provide a logically coherent way to talk about, say, how a line can be composed of points with no length does not tell us where the length of a line comes from. Still, du Sautoy knows rather a few things that I don’t. (The philosophers are Barbara Sattler, with the University of St Andrews, and James Warren, with the University of Cambridge. I know nothing further of either of them.)

The episode also discusses the Quantum Zeno Effect. This is physics, not mathematics, but it’s unsettling nonetheless. The time-evolution of certain systems can be stopped, or accelerated, by frequent measurements of the system. This is not something Zeno would have been pondering. But it is a challenge to our intuition about how change ought to work.

I’ve written some of my own thoughts about some of Zeno’s paradoxes, as well as on the Sorites paradox, which is discussed along the way in this episode. And the episode has prompted new thoughts in me, particularly about what it might mean to do infinitely many things. And what a “thing” might be. This is probably a topic Zeno was hoping listeners would ponder.

My 2019 Mathematics A To Z: Zeno’s Paradoxes


Today’s A To Z term was nominated by Dina Yagodich, who runs a YouTube channel with a host of mathematics topics. Zeno’s Paradoxes exist in the intersection of mathematics and philosophy. Mathematics majors like to declare that they’re all easy. The Ancient Greeks didn’t understand infinite series or infinitesimals like we do. Now they’re no challenge at all. This reflects a belief that philosophers must be silly people who haven’t noticed that one can, say, exit a room.

This is your classic STEM-attitude of missing the point. We may suppose that Zeno of Elea occasionally exited rooms himself. That is a supposition, though. Zeno, like most philosophers who lived before Socrates, we know from other philosophers making fun of him a century after he died. Or at least trying to explain what they thought he was on about. Modern philosophers are expected to present others’ arguments as well and as strongly as possible. This even — especially — when describing an argument they want to say is the stupidest thing they ever heard. Or, to use the lingo, when they wish to refute it. Ancient philosophers had no such compulsion. They did not mind presenting someone else’s argument sketchily, if they supposed everyone already knew it. Or even badly, if they wanted to make the other philosopher sound ridiculous. Between that and the sparse nature of the record, we have to guess a bit about what Zeno precisely said and what he meant. This is all right. We have some idea of things that might reasonably have bothered Zeno.

And they have bothered philosophers for thousands of years. They are about change. The ones I mean to discuss here are particularly about motion. And there are things we do not understand about change. This essay will not answer what we don’t understand. But it will, I hope, show something about why that’s still an interesting thing to ponder.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it, completes the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Zeno’s Paradoxes.

When we capture a moment by photographing it we add lies to what we see. We impose a frame on its contents, discarding what is off-frame. We rip an instant out of its context. And that is before considering how we stage photographs, making people smile and stop tilting their heads. We forgive many of these lies. The things excluded from or the moments around the one photographed might not alter what the photograph represents. Making everyone smile can convey the emotional average of the event in a way that no individual moment represents. Arranging people to stand in frame can convey who participated in a way that a candid photograph would not.

But there remains the lie that a photograph is “a moment”. It is no such thing. We notice this when the photograph is blurred. It records all the light passing through the lens while the shutter is open. A photograph records an eighth of a second. A thirtieth of a second. A thousandth of a second. But still, some time. There is always the ghost of motion in a picture. If we do not see it, it is because our photograph’s resolution is too coarse. If we could photograph something with infinite fidelity we would see, even in still life, the wobbling of the molecules that make up a thing.

A photograph of a blurry roller coaster passing through a vertical loop.
One of the many loops of Vortex, a roller coaster at Kings Island amusement park from 1987 to 2019. Taken by me the last day of the ride’s operation; this was one of the roller coaster’s runs after 7 pm, the close of the park the last day of the season.

Which implies something fascinating to me. Think of a reel of film. Here I mean old-school pre-digital film, the thing that’s a great strip of pictures, a new one shown 24 times per second. Each frame of film is a photograph, recording some split-second of time. How much time is actually in a film, then? How long, cumulatively, was a camera shutter open during a two-hour film? I use pre-digital, strip-of-film movies for convenience. Digital films offer the same questions, but with different technical points. And I do not want the writing burden of describing both analog and digital film technologies. So I will stick to the long sequence of analog photographs model.

Let me imagine a movie. One of an ordinary everyday event; an actuality, to use the terminology of 1898. A person overtaking a walking tortoise. Look at the strip of film. There are many frames which show the person behind the tortoise. There are many frames showing the person ahead of the tortoise. When are the person and the tortoise at the same spot?

We have to put in some definitions. Fine; do that. Say we mean when the leading edge of the person’s nose overtakes the leading edge of the tortoise’s, as viewed from our camera. Or, since there must be blur, when the center of the blur of the person’s nose overtakes the center of the blur of the tortoise’s nose.

Do we have the frame when that moment happened? I’m sure we have frames from the moments before, and frames from the moments after. But the exact moment? Are you positive? If we zoomed in, would it actually show the person is a millimeter behind the tortoise? That the person is a hundredth of a millimeter ahead? A thousandth of a hair’s width behind? Suppose that our camera is very good. It can take frames representing as small a time as we need. Does it ever capture that precise moment? To the point that we know, no, it’s not the case that the tortoise is one-trillionth the width of a hydrogen atom ahead of the person?

If we can’t show the frame where this overtaking happened, then how do we know it happened? To put it in terms a STEM major will respect, how can we credit a thing we have not observed with happening? … Yes, we can suppose it happened if we suppose continuity in space and time. Then it follows from the intermediate value theorem. But then we are begging the question. We impose the assumption that there is a moment of overtaking. This does not prove that the moment exists.

Fine, then. What if time is not continuous? If there is a smallest moment of time? … If there is, then, we can imagine a frame of film that photographs only that one moment. So let’s look at its footage.

One thing stands out. There’s finally no blur in the picture. There can’t be; there’s no time during which to move. We might not catch the moment that the person overtakes the tortoise. It could “happen” in-between moments. But at least we have a moment to observe at leisure.

So … what is the difference between a picture of the person overtaking the tortoise, and a picture of the person and the tortoise standing still? A movie of the two walking should be different from a movie of the two pretending to be department store mannequins. What, in this frame, is the difference? If there is no observable difference, how does the universe tell whether, next instant, these two should have moved or not?

A mathematical physicist may toss in an answer. Our photograph is only of positions. We should also track momentum. Momentum carries within it the information of how position changes over time. We can’t photograph momentum, not without getting blurs. But analytically? If we interpret a photograph as “really” tracking the positions of a bunch of particles? To the mathematical physicist, momentum is as good a variable as position, and it’s as measurable. We can imagine a hyperspace photograph that gives us an image of positions and momentums. So, STEM types show up the philosophers finally, right?

Hold on. Let’s allow that somehow we get changes in position from the momentum of something. Hold off worrying about how momentum gets into position. Where does a change in momentum come from? In the mathematical physics problems we can do, the change in momentum has a value that depends on position. In the mathematical physics problems we have to deal with, the change in momentum has a value that depends on position and momentum. But that value? Put it in words. That value is the change in momentum. It has the same relationship to acceleration that momentum has to velocity. For want of a real term, I’ll call it acceleration. We need more variables. An even more hyperspatial film camera.

… And does acceleration change? Where does that change come from? That is going to demand another variable, the change-in-acceleration. (The “jerk”, according to people who want to tell you that “jerk” is a commonly used term for the change-in-acceleration, and no one else.) And the change-in-change-in-acceleration. Change-in-change-in-change-in-acceleration. We have to invoke an infinite regression of new variables. We got here because we wanted to suppose it wasn’t possible to divide a span of time infinitely many times. This seems like a lot to build into the universe to distinguish a person walking past a tortoise from a person standing near a tortoise. And then we still must admit not knowing how one variable propagates into another. That a person is wide is not usually enough explanation of how they are growing taller.

Numerical integration can model this kind of system with time divided into discrete chunks. It teaches us some ways that this can make logical sense. It also shows us that our projections will (generally) be wrong. At least unless we do things like have an infinite number of steps of time factor into each projection of the next timestep. Or use the forecast of future timesteps to correct the current one. Maybe use both. These are … not impossible. But being “ … not impossible” is not to say satisfying. (We allow numerical integration to be wrong by quantifying just how wrong it is. We call this an “error”, and have techniques that we can use to keep the error within some tolerated margin.)
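To make that concrete, here’s a tiny toy example of my own, not any production method: a forward-Euler integration of a harmonic oscillator, time chopped into discrete chunks. The projection is always a little wrong, and halving the timestep roughly halves the error, which is how we quantify and tolerate it.

```python
import math

def euler_oscillator(steps: int) -> float:
    """Integrate x'' = -x from x=1, v=0 over one period with forward Euler.
    Returns the error versus the exact answer x(2*pi) = 1."""
    dt = 2 * math.pi / steps
    x, v = 1.0, 0.0
    for _ in range(steps):
        # Each step projects forward using only the current position and momentum.
        x, v = x + dt * v, v - dt * x
    return abs(x - 1.0)

coarse = euler_oscillator(1000)
fine = euler_oscillator(2000)
# Halving the timestep roughly halves the error: first-order convergence.
```

The error never vanishes for any finite step; it only shrinks within whatever margin we decide to tolerate.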

So where has the movement happened? The original scene had movement to it. The movie seems to represent that movement. But that movement doesn’t seem to be in any frame of the movie. Where did it come from?

We can have properties that appear in a mass which don’t appear in any component piece. No molecule of a substance has a color, but a big enough mass does. No atom of iron is ferromagnetic, but a chunk might be. No grain of sand is a heap, but enough of them are. The Ancient Greeks knew this; we call it the Sorites paradox, after Eubulides of Miletus. (“Sorites” means “heap”, as in heap of sand. But if you had to bluff through a conversation about ancient Greek philosophers you could probably get away with making up a quote you credit to Sorites.) Could movement be, in the term mathematical physicists use, an intensive property? But intensive properties are obvious to the outside observer of a thing. We are not outside observers to the universe. It’s not clear what it would mean for there to be an outside observer to the universe. Even if there were, what space and time are they observing in? And aren’t their space and their time and their observations vulnerable to the same questions? We’re in danger of insisting on an infinite regression of “universes” just so a person can walk past a tortoise in ours.

We can say where movement comes from when we watch a movie. It is a trick of perception. Our eyes take some time to understand a new image. Our brains insist on forming a continuous whole story even out of disjoint ideas. Our memory fools us into remembering a continuous line of action. That a movie moves is entirely an illusion.

You see the implication here. Surely Zeno was not trying to lead us to understand all motion, in the real world, as an illusion? … Zeno seems to have been trying to support the work of Parmenides of Elea. Parmenides is another pre-Socratic philosopher. So we have about four words that we’re fairly sure he authored, and we’re not positive what order to put them in. Parmenides was arguing about the nature of reality, and what it means for a thing to come into or pass out of existence. He seems to have been arguing something like that there was a true reality that’s necessary and timeless and changeless. And there’s an apparent reality, the thing our senses observe. And in our sensing, we add lies which make things like change seem to happen. (Do not use this to get through your PhD defense in philosophy. I’m not sure I’d use it to get through your Intro to Ancient Greek Philosophy quiz.) That what we perceive as movement is not what is “really” going on is, at least, imaginable. So it is worth asking questions about what we mean for something to move. What difference there is between our intuitive understanding of movement and what logic says should happen.

(I know someone wishes to throw down the word Quantum. Quantum mechanics is a powerful tool for describing how many things behave. It implies limits on what we can simultaneously know about the position and the time of a thing. But there is a difference between “what time is” and “what we can know about a thing’s coordinates in time”. Quantum mechanics speaks more to the latter. There are also people who would like to say Relativity. Relativity, special and general, implies we should look at space and time as a unified set. But this does not change our questions about continuity of time or space, or where to find movement in both.)

And this is why we are likely never to finish pondering Zeno’s Paradoxes. In this essay I’ve only discussed two of them: Achilles and the Tortoise, and The Arrow. There are two other particularly famous ones: the Dichotomy, and the Stadium. The Dichotomy is the one about how to get somewhere, you have to get halfway there. But to get halfway there, you have to get a quarter of the way there. And an eighth of the way there, and so on. The Stadium is the hardest of the four great paradoxes to explain. This is in part because the earliest writings we have about it don’t make clear what Zeno was trying to get at. I can think of something which seems consistent with what’s described, and contrary-to-intuition enough to be interesting. I’m satisfied to ponder that one. But other people may have different ideas of what the paradox should be.

There are a handful of other paradoxes which don’t get so much love, although one of them is another version of the Sorites Paradox. Some of them the Stanford Encyclopedia of Philosophy dubs “paradoxes of plurality”. These ask how many things there could be. It’s hard to judge just what Zeno was getting at with these. We know that one argument had three parts, and only two of them survive. Trying to fill in that gap is a challenge. We want to fill in the argument we would make, projecting from our modern idea of this plurality. It’s not Zeno’s idea, though, and we can’t know how close our projection is.

I don’t have the space to make a thematically coherent essay describing these all, though. The set of paradoxes has demanded thought, even just to come up with a reason to think they don’t demand thought, for thousands of years. We will, perhaps, have to keep trying again to fully understand what it is we don’t understand.


And with that — I find it hard to believe — I am done with the alphabet! All of the Fall 2019 A-to-Z essays should appear at this link. Additionally, the A-to-Z sequences of this and past years should be at this link. Tomorrow and Saturday I hope to bring up some mentions of specific past A-to-Z essays. Next week I hope to share my typical thoughts about what this experience has taught me, and some other writing about this writing.

Thank you, all who’ve been reading, and who’ve offered topics, comments on the material, or questions about things I was hoping readers wouldn’t notice I was shorting. I’ll probably do this again next year, after I’ve had some chance to rest.

In Which I Learn A Thing About Non-Euclidean Geometries


I got a book about the philosophy of mathematics, Stephan Körner’s The Philosophy of Mathematics: An Introductory Essay. It’s a subject I’m interested in, despite my lack of training. I made it to the second page before I got to something that I had to stop and ponder. I thought to share that point and my reflections with you, because if I had to think I may as well get an essay out of it. He lists some pure-mathematical facts and some applied-mathematical counterparts, among them:

  • (2) any (Euclidean) triangle which is equiangular is also equilateral
  • (5) if the angles of a triangular piece of paper are equal then its sides are also equal

So where I stopped was: what is the (Euclidean) doing in that first proposition there? Or, in its counterpart, the specification that the triangles are pieces of paper?

I’m not versed in non-Euclidean geometry. My training brought me to applied-physics applications right away. I never needed a full course in non-Euclidean geometries and have never picked up much on my own. It’s an oversight I’m embarrassed by and I sometimes think to take a proper class. So this bit about equiangular-triangles not necessarily being equilateral was new to me.

Euclidean geometry everyone knows; it’s the way space works on table tops and in normal rooms. Non-Euclidean geometries are harder to understand. It was surprisingly late that mathematicians understood they were legitimate. There are two classes of non-Euclidean geometries. One is “spherical geometries”, the way geometry works … on the surface of a sphere or a squished-out ball. This is familiar enough to people who navigate or measure large journeys on the surface of the Earth. Well. The other non-Euclidean geometry is “hyperbolic geometry”. This is how shapes work on the surface of a saddle shape. It’s familiar enough to … some mathematicians who work in non-Euclidean geometries and people who ride horses. Maybe also the horses.

But! Could someone as amateur as I am in this field think of an equiangular but not equilateral triangle? Hyperbolic geometries seemed sure to produce one. But it’s hard to think about shapes on saddles so I figured to use that only if I absolutely had to. How about on a spherical geometry? And there I got to one of the classic non-Euclidean triangles. Imagine the surface of the Earth. Imagine a point at the North Pole. (Or the South Pole, if you’d rather.) Imagine a point on the equator at longitude 0 degrees. And imagine another point on the equator at longitude 90 degrees, east or west as you like. Draw the lines connecting those three points. That’s a triangle with three right angles on its interior, which is exactly the sort of thing you can’t have in Euclidean geometry.
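You can check that triangle’s angles numerically. Here’s a little Python sketch of my own (the arithmetic, not Euclid’s): put the three vertices on a unit sphere and measure the angle between the great-circle arcs leaving each vertex.

```python
import math

def sphere_angle(p, q, r):
    """Interior angle at vertex p of the spherical triangle p-q-r,
    measured between the great-circle arcs p->q and p->r."""
    def tangent(a, b):
        # Component of b perpendicular to a: the direction the arc a->b leaves a.
        dot = sum(x * y for x, y in zip(a, b))
        vec = [y - dot * x for x, y in zip(a, b)]
        norm = math.sqrt(sum(x * x for x in vec))
        return [x / norm for x in vec]
    t1, t2 = tangent(p, q), tangent(p, r)
    return math.degrees(math.acos(sum(x * y for x, y in zip(t1, t2))))

north = (0.0, 0.0, 1.0)   # the North Pole
lon0 = (1.0, 0.0, 0.0)    # a point on the equator at longitude 0
lon90 = (0.0, 1.0, 0.0)   # a point on the equator at longitude 90 degrees
angles = [sphere_angle(north, lon0, lon90),
          sphere_angle(lon0, north, lon90),
          sphere_angle(lon90, north, lon0)]
# All three interior angles come out to 90 degrees; they sum to 270, not 180.
```

Three right angles, as promised, which no Euclidean triangle can manage.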

(Which gives me another question that proves how ignorant I am of the history of mathematics. This is an easy-to-understand example. You don’t even need to be an Age of Exploration navigator to understand it. You only need a globe. Or a ball. So why did it take so long for mathematicians to accept the existence of non-Euclidean geometries? My guess is that maybe they understood this surface stuff as a weird feature of solid geometries, rather than an internally consistent geometry. But I defer to anyone who actually knows something about the history of non-Euclidean geometries to say.)

And that’s fine, but it’s also an equilateral triangle. I can imagine smaller equiangular triangles. Ones with interior angles nearer to 60 degrees each. They have to be smaller, but that’s all right. They all seem to be equilateral, though. The closer the angles are to 60 degrees, the smaller the triangle is, and the more everything looks like it’s on a flat surface. Like a piece of paper.

So. Hyperbolic geometry, and the surface of a saddle, after all? Oh dear I hope not. Maybe I could look at something else.

So while I, and many people, talk about spherical geometry, it doesn’t have to be literally the geometry of the surface of a sphere. It can be other nice, smooth shapes. Ellipsoids, for example, spheres that have got stretched out in one direction or another. For example, what if we took that globe and stretched it out some? Leave the equatorial diameter at (say) twelve inches. But expand it so that the distance from North Pole to South Pole is closer to 480 miles. This may seem extreme. But one of the secrets of mathematicians is to consider cartoonishly extreme exaggerations. They’re often useful in getting your intuition to show that something must be so.

Ah, now. Consider that North Pole-Equator-Equator triangle I had before. The North-Pole-to-equator-point distance is right about 240 miles. The equator-point-to-other-equator-point distance is more like nine and a half inches. Definitely not equilateral. But it’s equiangular; all the interior angles are 90 degrees still.
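Those numbers are easy to check with a quick numerical sketch, my own arithmetic rather than anything from Körner. Integrate the arc length along a meridian of that stretched globe and compare it to a quarter-turn along the equator:

```python
import math

INCHES_PER_MILE = 63360
a = 6.0                          # equatorial radius: half a 12-inch diameter, in inches
b = 240.0 * INCHES_PER_MILE      # polar radius: half the 480-mile pole-to-pole axis

def quarter_meridian(a, b, n=100000):
    """Arc length from equator to pole along a meridian of the ellipsoid,
    by midpoint-rule integration along the ellipse (a*cos t, b*sin t)."""
    total, dt = 0.0, (math.pi / 2) / n
    for i in range(n):
        t = (i + 0.5) * dt
        total += math.sqrt((a * math.sin(t)) ** 2 + (b * math.cos(t)) ** 2) * dt
    return total

pole_to_equator_miles = quarter_meridian(a, b) / INCHES_PER_MILE
equator_quarter_inches = (math.pi / 2) * a   # 90 degrees of longitude along the equator
# Roughly 240 miles on one side, about nine and a half inches on the other.
```

Equiangular, spectacularly not equilateral.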

My doubts refuted! And I didn’t have to consider the saddle shape. Very good. “Euclidean” is doing some useful work in that proposition. And the specification that the triangles are pieces of paper does the same work.

And yes, I know that all the real mathematicians out there are giggling at me. This has to be pretty near the first thing one learns in non-Euclidean geometry. It’s so easy to run across, and it so defies ordinary-world intuition. I ought to take a class.

Reading the Comics, June 23, 2018: Big Duck Energy Edition


I didn’t have even some good nonsense for this edition’s title and it’s a day late already. And that for only having a couple of comics, most of them reruns. And then this came across my timeline:

Please let it not be a big milkshake duck. I can’t take it if it is.

Larry Wright’s Motley for the 21st uses mathematics as an emblem of impossibly complicated stuff to know. I’m interested to see that biochemistry was also called in to represent something that needs incredible brainpower to know things that can be expressed in one panel. Another free little question: what might “2,368 to the sixth power times pi” be an answer to? The obvious answer to me is “what’s the area of a circle of radius 2,368 to the third power”. That seems like a bad quiz-show question to me, though. It tests a legitimate bit of trivia, but the radius is such an ugly number. There are some other obvious questions that might fit, like “what is the circumference of a circle of radius [ or diameter ] of (ugly number here)?” Or “what is the volume of a sphere of radius (similarly ugly number here)?” But the radius (or diameter) of those surfaces would have to be really nasty numbers, ones with radicals of 2,368 — itself no charming number — in it.

Debbie, yelling at the TV: 'Hydromononucleatic acid! 2,368 to the sixth power times pi, stupid!' (Walking away, disgusted.) 'I can't believe the questions on these game shows are so easy and no one ever gets them!'
Larry Wright’s Motley rerun for the 21st of June, 2018. It originally ran sometime in 1997.

And “2,368 to the sixth power times pi” is the answer to infinitely many questions. The challenge is finding one that’s plausible as a quiz-show question. That is it should test something that’s reasonable for a lay person to know, and to calculate while on stage, without pen or paper or much time to reflect. Tough set of constraints, especially to get that 2,368 in there. The sixth power isn’t so easy either.
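For the record, the arithmetic of my candidate question checks out; a couple of lines of Python, just for illustration:

```python
import math

# Debbie's answer as a number:
answer = math.pi * 2368 ** 6

# One question it answers: a circle of radius 2368 cubed has area
# pi * (2368**3)**2, which is exactly pi * 2368**6.
radius = 2368 ** 3
area = math.pi * radius ** 2

# The circumference reading forces an even uglier radius:
circumference_radius = 2368 ** 6 / 2   # since 2 * pi * r = pi * 2368**6
```

That’s a number around five and a half times ten to the twentieth, whatever units you like.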

Well, the biochemistry people don’t have an easy time thinking of a problem to match Debbie’s answer either. “Hydro- ” and “mono- ” are plausible enough prefixes, but as far as I know there’s no “nucleatic acid” to have some modified variant. Wright might have been thinking of nucleic acid, but as far as I know there’s no mononucleic acid, much less hydromononucleic acid. But, yes, that’s hardly a strike against the premise of the comic. It’s just nitpicking.

[ During his first day at Math Camp, Jim Smith learns the hard way he's not a numbers person. ] Coach: 'The ANSWER, Mr Smith?' (Smith's head pops open, ejecting a brain, several nuts, and a few screws; he says the null symbol.)
Charlie Pondrebarac’s CowTown rerun for the 22nd of June, 2018. I don’t know when it first ran, but it seems to be older than most of the CowTown reruns offered.

Charlie Pondrebarac’s CowTown for the 22nd is on at least its third appearance since I started reading the comics for the mathematics stuff regularly. I covered it in June 2016 and also in August 2015. This suggests a weird rerun cycle for the comic. Popping out of Jim Smith’s mouth is the null symbol, which represents a set that hasn’t got any elements. That set is known as the null set. Every set, including the null set, contains the null set as a subset. This fact makes set theory a good bit easier than it otherwise would be. That’s peculiar, considering that it is literally nothing. But everything one might want to say about “nothing” is peculiar. That doesn’t make it dispensable.
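Python’s sets make the point easy to poke at, if you like that sort of thing; here’s a trivial sketch of my own:

```python
empty = set()
examples = [set(), {1}, {1, 2, 3}, {"null", "set"}]

# The empty set is a subset of every set, itself included.
all_subset = all(empty <= s for s in examples)

# And the empty set has exactly one subset -- itself. A set with n
# elements has 2**n subsets, and 2**0 is 1.
subset_count = 2 ** len(empty)
```

Being a subset is a different relationship from being an element, which is the sort of distinction that keeps set theory honest.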

Marlene: 'Timmy's school says that all 7th and 8th graders have to buy a $98 calculator for math this year!' Burl: 'Whatever happened to timesing and minusing in your head?' Dale: 'I remember all we had to get for math was a slide rule for drawing straight lines and a large eraser.' (On the TV is 'The Prices Is Right, Guest Host Stephen Hawking', and they have a videotape of 'A Beautiful Mind'.)
Julie Larson’s Dinette Set rerun for the 22nd of June, 2018. It originally ran the 15th of August, 2007. Don’t worry about what’s on the TV, what’s on the videotape box, or Marlene’s ‘Gladys Kravitz Active Wear’ t-shirt; they’re side jokes, not part of the main punchline of the strip. Ditto the + and – coffee mugs.

Julie Larson’s Dinette Set for the 22nd sees the Penny family’s adults bemoaning the calculator their kid needs for middle school. I admit feeling terror at being expected to buy a hundred-dollar calculator for school. But I also had one (less expensive) when I was in high school. It saves a lot of boring routine work. And it allows for playful discoveries about arithmetic. Some of them are cute trivialities, such as finding the Golden Ratio and similar quirks. And a calculator does do essentially the work that a slide rule might, albeit more quickly and with more digits of precision. It can’t tell you what to calculate or why, but it can take the burden out of getting the calculation done. Still, a hundred bucks. Wow.

Couple watching a newscaster report: 'Experts are still struggling to explain how, for a few brief moments this year, two plus two equalled five.'
Tony Carrillo’s F Minus for the 23rd of June, 2018. It is not a rerun and first appeared the 23rd of June, 2018, so far as I know.

Tony Carrillo’s F Minus for the 23rd puts out the breaking of a rule of arithmetic as a whimsical, inexplicable event. A moment of two plus two equalling five, whatever it might do for the structure of the universe, would be awfully interesting for the philosophy of mathematics. Given what we ordinarily think we mean by ‘two’ and ‘plus’ and ‘equals’ and ‘five’ that just can’t happen. And what would it mean for two plus two to equal five for a few moments? Mathematicians often think about the weird fact that mathematical structures — crafted from definitions and logic — describe the real world stunningly well. Would this two plus two equalling five be something that was observed in the real world, and checked against definitions that suddenly allowed this? Would this be finding a chain of reasoning that supported saying two plus two equalled five, only to find a few minutes later that a proof everyone was satisfied with was now clearly wrong?

That’s a particularly chilling prospect, if you’re in the right mood. We like to think mathematical proofs are absolute and irrefutable, things which are known to be true regardless of who knows them, or what state they’re in, or anything. And perhaps they are. They seem to come as near as mortals can to seeing Platonic forms. (My understanding is that mathematical constructs are not Platonic forms, at least in Plato’s view of things. But they are closer to being forms than, say, apples put on a table for the counting would be.) But what we actually know is whether we, fallible beings comprised of meat that thinks, are satisfied that we’ve seen a proof. We can be fooled. We can think something is satisfactory because we haven’t noticed an implication that’s obviously wrong or contradictory. Or because we’re tired and are feeling compliant. Or because we ate something that’s distracting us before we fully understand an argument. We may have a good idea of what a satisfactory logical proof would be. But stare at the idea hard enough and we realize we might never actually know one.

If you’d like to see more Reading the Comics posts, you can find them at this link. If you’re interested in the individual comics, here you go. My essays tagged with CowTown are here. Essays tagged Dinette Set are at this link. The essays that mention F Minus since I started adding strip tags are here. And this link holds the Motley comics.

Reading the Comics, July 22, 2017: Counter-mudgeon Edition


I’m not sure there is an overarching theme to the past week’s gifts from Comic Strip Master Command. If there is, it’s that I feel like some strips are making cranky points and I want to argue against their cases. I’m not sure what the opposite of a curmudgeon is. So I shall dub myself, pending a better idea, a counter-mudgeon. This won’t last, as it’s not really a good name, but there must be a better one somewhere. We’ll see it, now that I’ve said I don’t know what it is.

Rabbits at a chalkboard. 'The result is not at all what we expected, Von Thump. According to our calculations, parallel universes may exist, and we may also be able to link them with our own by wormholes that, in strictly mathematical terms, end up in a black top hat.'
Niklas Eriksson’s Carpe Diem for the 17th of July, 2017. First, if anyone isn’t thinking of that Pixar short then I’m not sure we can really understand each other. Second, ‘von Thump’ is a fine name for a bunny scientist and if it wasn’t ever used in the rich lore of Usenet group alt.devilbunnies I shall be disappointed. Third, Eriksson made an understandable but unfortunate mistake in composing this panel. While both rabbits are wearing glasses, they’re facing away from the viewer. It’s always correct to draw animals wearing eyeglasses, or to photograph them so. But we should get to see them in full eyeglass pelage. You’d think they would teach that in Cartoonist School or something.

Niklas Eriksson’s Carpe Diem for the 17th features the blackboard full of equations as icon for serious, deep mathematical work. It also features rabbits, although probably not for their role in shaping mathematical thinking. Rabbits and their breeding were used in the simple toy model that gave us Fibonacci numbers, famously. And the population of Arctic hares gives those of us who’ve reached differential equations a great problem to do. The ecosystem in which Arctic hares live can be modelled very simply, as hares and a generic predator. We can model how the populations of both grow with simple equations that nevertheless give us surprises. In a rich, diverse ecosystem we see a lot of population stability: one year where an animal is a little more fecund than usual doesn’t matter much. In the sparse ecosystem of the Arctic, and the one we’re building worldwide, small changes can matter enormously. We can even produce deterministic chaos, in which if we knew exactly how many hares and predators there were, and exactly how many of them would be born and exactly how many would die, we could predict future populations. But the tiny difference between our attainable estimate and the reality, even if it’s as small as one hare too many or too few in our model, makes our predictions worthless. It’s thrilling stuff.
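Here’s a minimal sketch of that hare-and-predator idea, the classic Lotka-Volterra model, with parameters I made up for illustration and a crude Euler integration:

```python
def simulate(hares, preds, steps=20000, dt=0.001):
    """Track the hare population of a simple Lotka-Volterra system."""
    history = []
    for _ in range(steps):
        dh = hares * (1.0 - 0.02 * preds)   # hares breed; predators eat them
        dp = preds * (0.01 * hares - 1.0)   # predators starve without hares
        hares, preds = hares + dt * dh, preds + dt * dp
        history.append(hares)
    return history

h = simulate(hares=120.0, preds=40.0)
boom, bust = max(h), min(h)
# The hare population swings far above and below where it started:
# boom-and-bust cycles out of two innocuous-looking equations.
```

Even this tame version never settles down; it just keeps cycling around the equilibrium of a hundred hares and fifty predators.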

Vic Lee’s Pardon My Planet for the 17th reads, to me, as a word problem joke. The talk about how much change Marian should get back from Blake could be any kind of minor hassle in the real world where one friend covers the cost of something for another but expects to be repaid. But counting how many more nickels one person has than another? That’s of interest to kids and to story-problem authors. Who else worries about that count?

Fortune teller: 'All of your money problems will soon be solved, including how many more nickels Beth has than Jonathan, and how much change Marian should get back from Blake.'
Vic Lee’s Pardon My Planet for the 17th of July, 2017. I am surprised she had no questions about how many dimes Jonathan must have, although perhaps that will follow obviously from knowing the Beth nickel situation.

Jef Mallet’s Frazz for the 17th straddles that triple point joining mathematics, philosophy, and economics. It seems sensible, in an age that embraces the idea that everything can be measured, to try to quantify happiness. And it seems sensible, in an age that embraces the idea that we can model and extrapolate and act on reasonable projections, to try to see what might improve our happiness. This is so even if it’s as simple as identifying what we should or shouldn’t be happy about. Caulfield is circling around the discovery of utilitarianism. It’s a philosophy that (for my money) is better-suited to problems like how ought the city arrange its bus lines than matters too integral to life. But it, too, can bring comfort.

Corey Pandolph’s Barkeater Lake rerun for the 20th features some mischievous arithmetic. I’m amused. It turns out that people do have enough of a number sense that very few people would let “17 plus 79 is 4,178” pass without comment. People might not be able to say exactly what it is, on a glance. If you answered that 17 plus 79 was 95, or 102, most people would need to stop and think about whether either was right. But they’re likely to know without thinking that it can’t be, say, 56 or 206. This, I understand, is so even for people who aren’t good at arithmetic. There is something amazing that we can do this sort of arithmetic so well, considering that there’s little obvious in the natural world that would need the human animal to add 17 and 79. There are things about how animals understand numbers which we don’t know yet.

Alex Hallatt’s Human Cull for the 21st seems almost a direct response to the Barkeater Lake rerun. Somehow “making change” is treated as the highest calling of mathematics. I suppose it has a fair claim to the title of mathematics most often done. Still, I can’t get behind Hallatt’s crankiness here, and not just because Human Cull is one of the most needlessly curmudgeonly strips I regularly read. For one, store clerks don’t need to do mathematics. The cash registers do all the mathematics that clerks might need to do, and do it very well. The machines are cheap, fast, and reliable. Not using them is an affectation. I’ll grant it gives some charm to antiques shops and boutiques where they write your receipt out by hand, but that’s for atmosphere, not reliability. And it is useful the clerk having a rough idea what the change should be. But that’s just to avoid the risk of mistakes getting through. No matter how mathematically skilled the clerk is, there’ll sometimes be a price entered wrong, or the customer’s money counted wrong, or a one-dollar bill put in the five-dollar bill’s tray, or a clerk picking up two nickels when three would have been more appropriate. We should have empathy for the people doing this work.

Some End-Of-August Mathematics Reading


I’ve found a good way to procrastinate on the next essay in the Why Stuff Can Orbit series. (I’m considering explaining all of differential calculus, or as much as anyone really needs, to save myself a little work later on.) In the meanwhile, though, here’s some interesting reading that’s come to my attention the last few weeks and that you might procrastinate your own projects with. (Remember Benchley’s Principle!)

First is Jeremy Kun’s essay Habits of highly mathematical people. I think it’s right in describing some of the worldview mathematics training instills, or that encourage people to become mathematicians. It does seem to me, though, that most everything Kun describes is also true of philosophers. I’m less certain, but I strongly suspect, that it’s also true of lawyers. These concentrations all tend to encourage thinking about what we mean by things, and to test those definitions by thought experiments. If we suppose this to be true, then what implications would it have? What would we have to conclude is also true? Does it include anything that would be absurd to say? And are the results useful enough that we can accept a bit of apparent absurdity?

New York magazine had an essay: Jesse Singal’s How Researchers Discovered the Basketball “Hot Hand”. The “Hot Hand” phenomenon is one every sports enthusiast, and most casual fans, know: sometimes someone is just playing really, really well. The problem has always been figuring out whether it exists. Do anything that isn’t a sure bet long enough and there will be streaks. There’ll be a stretch where it always happens; there’ll be a stretch where it never does. That’s how randomness works.
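It’s easy to see how randomness produces streaks in simulation. A short Python sketch of my own, with no hot hand built in at all:

```python
import random

def longest_run(flips):
    """Length of the longest consecutive run of successes."""
    best = current = 0
    for hit in flips:
        current = current + 1 if hit else 0
        best = max(best, current)
    return best

rng = random.Random(0)
# A "50 percent shooter" taking 1000 shots, each independent of the last:
shots = [rng.random() < 0.5 for _ in range(1000)]
streak = longest_run(shots)
# The longest streak typically lands near log2(1000), around 9 or 10 --
# long enough to feel like a hot hand, produced by pure chance.
```

Watch a streak like that happen live and it’s very hard to believe nothing special is going on.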

But it’s hard to show that. The messiness of the real world interferes. A chance of making a basketball shot is not some fixed thing over the course of a career, or over a season, or even over a game. Sometimes players do seem to be hot. Certainly anyone who plays anything competitively experiences a feeling of being in the zone, during which stuff seems to just keep going right. It’s hard to disbelieve something that you witness, even experience.

So the essay describes some of the challenges of this: coming up with a definition of a “hot hand”, for one. Coming up with a way to test whether a player has a hot hand. Seeing whether they’re observed in the historical record. Singal’s essay writes about some of the history of studying hot hands. There is a lot of probability, and of psychology, and of experimental design in it.

And then there’s this intriguing question Analysis Fact Of The Day linked to: did Gaston Julia ever see a computer-generated image of a Julia Set? There are many Julia Sets; they and their relative, the Mandelbrot Set, became trendy in the fractals boom of the 1980s. If you knew a mathematics major back then, there was at least one on her wall. It typically looks like a craggly, lightning-rimmed cloud. Its shapes are not easy to imagine. It’s almost designed for the computer to render. Gaston Julia died in March of 1978. Could he have seen a depiction?

It’s not clear. The linked discussion digs up early computer renderings. It also brings up an example of a late-19th-century hand-drawn depiction of a Julia-like set, and compares it to a modern digital rendition of the thing. Numerical simulation saves a lot of tedious work; but it’s always breathtaking to see how much can be done by reason.
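To show how nearly the Julia set was designed for the computer, here’s a crude text-mode rendering. I’ve picked c = −1, the “basilica”, one of the simplest connected examples; Julia studied the whole family.

```python
def julia_escape(z, c, max_iter=50):
    """Escape time of z under z -> z*z + c; max_iter means 'did not escape'."""
    for n in range(max_iter):
        if abs(z) > 2.0:
            return n
        z = z * z + c
    return max_iter

c = complex(-1.0, 0.0)   # one choice of c among the infinitely many Julia sets
rows = []
for i in range(20):
    y = 1.0 - i * 0.1
    row = "".join("#" if julia_escape(complex(-1.5 + j * 0.05, y), c) == 50 else "."
                  for j in range(60))
    rows.append(row)
picture = "\n".join(rows)   # '#' marks points that never escaped
```

A few dozen lines and a loop: tedious beyond reason by hand, trivial for the machine. Which is exactly what makes the hand-drawn nineteenth-century depiction so breathtaking.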

Why Stuff Can Orbit: Why It’s Waiting


I can’t imagine people are going to be surprised to hear this. But I have to put the “Why Stuff Can Orbit” series on hold. It’s about central forces and what circumstances make it possible for something to have a stable orbit. I mean to get back to it. It’s just that the Theorem Thursday posts take up a lot of thinking on my part. They end up running quite long and detailed. I figure to get back to it once I’ve exhausted the Theorem Thursday topics I have in mind, which should be shortly into August.

It happens I’d run across a WordPress blog that contained the whole of the stable-central-orbits argument, in terse but legitimate terms. I wanted to link to that now but the site’s been deleted for reasons I won’t presume to guess. I have guesses. Sorry.

But for some other interesting reading, here’s a bit about Immanuel Kant:

I have long understood, and passed on, that Immanuel Kant had the insight that the laws of physics tell us things about the geometry of space and vice-versa. I haven’t had the chance yet to read Francisco Caruso and Roberto Moreira Xavier’s On Kant’s First Insight into the Problem of Space Dimensionality and its Physical Foundations. But the abstract promises “a conclusion that does not match the usually accepted interpretation of Kant’s reasoning”. I would imagine this to be an interesting introduction to the question, then, and to what might be controversial about Kant and the number of dimensions space should have. Also we need to use the word “tridimensionality” more.

Reading the Comics, November 27, 2015: 30,000 Edition


By rights, if this installment has any title it should be “confident ignorance”. That state appears in many of the strips I want to talk about. But according to WordPress, my little mathematics blog here reached its 30,000th page view at long last. This is thanks largely to spillover from The Onion AV Club discovering my humor blog and its talk about the late comic strip Apartment 3-G. But a reader is a reader. And I want to celebrate reaching that big, round number. As I write this I’m at 30,162 page views, because there were a lot of AV Club-related readers.

Bob Weber Jr’s Slylock Fox for the 23rd of November maybe shouldn’t really be here. It’s just a puzzle game that depends on the reader remembering that two rectangles put against each other can be a rectangle again. It also requires deciding whether the frame of the artwork counts as one of the rectangles. The commenters at Comics Kingdom seem unsure whether to count squares as rectangles too. I don’t see any shapes that look more clearly like squares to me. But it’s late in the month and I haven’t had anything with visual appeal in these Reading the Comics installments in a while. Later we can wonder if “counting rectangles in a painting” is the most reasonable way a secret agent has to pass on a number. It reminds me of many, many puzzle mysteries Isaac Asimov wrote that were all about complicated ways secret agents could pass one bit of information on.

'The painting (of interlocking rectangles) is really a secret message left by an informant. It reveals the address of a house where stolen artwork is being stashed. The title, Riverside, is the street name, and the total amount of rectangles is the house number. Where will Slylock Fox find the stolen artwork?'
Bob Weber Jr’s Slylock Fox for the 23rd of November, 2015. I suppose the artist is lucky they weren’t hiding out at number 38, or she wouldn’t have been able to make such a compellingly symmetric diagram.

Ryan North’s Dinosaur Comics for the 23rd of November is a rerun from goodness knows when it first ran on Qwantz.com. It features T Rex thinking about the Turing Test. The test, named for Alan Turing, says that while we may not know what exactly makes up an artificial intelligence, we will know it when we see it. That is the sort of confident ignorance that earned Socrates a living. (I joke. Actually, Socrates was a stonecutter. Who knew, besides the entire philosophy department?) But the idea seems hard to dispute. If we can converse with an entity in such a way that we can’t tell it isn’t human, then, what grounds do we have for saying it isn’t human?

T Rex has an idea that the philosophy department had long ago, of course. That’s to simply “be ready for any possible opening with a reasonable conclusion”. He calls this a matter of brute force. That is, sometimes, a reasonable way to solve problems. It’s got a long and honorable history of use in mathematics. The name suggests some disapproval; it sounds like the way you get a new washing machine through a too-small set of doors. But sometimes the easiest way to find an answer is to just try all the possible outcomes until you find the ones that work, or show that nothing can. If I want to know whether 319 is a prime number, I can try reasoning my way through it. Or I can divide it by all the prime numbers from 2 up to 17. (The square root of 319 is a bit under 18.) Or I could look it up in a table someone already made of the prime numbers less than 400. I know what’s easier, if I have a table already.

The problem with brute force — well, one problem — is that it can be longwinded. We have to break the problem down into each possible different case. Even if each case is easily disposed of, the number of different cases can grow far too fast to be manageable. The amount of working time required, and the amount of storage required, can easily become too much to deal with. Mathematicians, and computer scientists, have a couple approaches for this. One is getting bigger computers with more memory. We might consider this the brute force method to solving the limits of brute force methods.

Or we might try to reduce the number of possible cases, so that less work is needed. Perhaps we can find a line of reasoning that covers many cases. Working out specific cases, as brute force requires, can often give us a hint to what a general proof would look like. Or we can at least get a bunch of cases dealt with, even if we can’t get them all done.

Jim Unger’s Herman rerun for the 23rd of November turns confident ignorance into a running theme for this essay’s comic strips.

Eric Teitelbaum and Bill Teitelbaum’s Bottomliners for the 24th of November has a similar confident ignorance. This time it’s of the orders of magnitude that separate billions from trillions. I wanted to try passing off some line about how there can be contexts where it doesn’t much matter whether a billion or a trillion is at stake. But I can’t think of one that makes sense for the Man At The Business Company Office setting.

Reza Farazmand’s Poorly Drawn Lines for the 25th of November is built on the same confusion about the orders of magnitude that Bottomliners is. In this case it’s ants that aren’t sure about how big millions are, so their confusion seems more natural.

The ants are also engaged in a fun sort of recreational mathematics: can you estimate something from little information? You’ve done that right, typically, if you get the size of the number about right. That it should be millions rather than thousands or hundreds of millions; that there should be something like ten rather than ten thousand. These kinds of problems are often called Fermi Problems, after Enrico Fermi. This is the same person the Fermi Paradox is named after, but that’s a different problem. The Fermi Paradox asks, if there are extraterrestrial aliens, why we don’t see evidence of them. A Fermi Problem is simpler. The iconic example is, “how many professional piano tuners are there in New York?” It’s easy to look up how big the population of New York is. It’s possible to estimate how many pianos there should be for a population that size. Then you can guess how often a piano needs tuning, and therefore, how many full-time piano tuners would be supported by that much piano-tuning demand. And there’s probably not many more professional piano tuners than there’s demand for. (Wikipedia uses Chicago as the example city for this, and asserts the population of Chicago to be nine million people. I will suppose that to mean the Chicago metropolitan area, which Wikipedia says is about that size, though it’s got a vested interest in saying so.)
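If you want to see how such an estimate hangs together, here’s the piano-tuner problem as a Python sketch. Every input is an assumption of mine, picked only to be roughly the right size; the answer is only meant to be right in its order of magnitude:

```python
# A Fermi estimate: how many piano tuners does a big city support?
population = 9_000_000             # assumed metro-area population
people_per_piano = 100             # assume 1 household in 20 has one, ~5 people per household
tunings_per_piano_per_year = 1     # assume an annual tuning
tunings_per_tuner_per_year = 1000  # assume about four tunings per working day

pianos = population / people_per_piano
tunings_needed = pianos * tunings_per_piano_per_year
tuners = tunings_needed / tunings_per_tuner_per_year
print(round(tuners))  # about 90: tens of tuners, not thousands
```

Change any assumption by a factor of two and the answer moves, but it stays comfortably in the tens, which is the whole point of the exercise.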

Mark Anderson’s Andertoons finally appears on the 27th. Here we combine the rational division of labor with resisting mathematics problems.

Reading the Comics, October 1, 2015: Big Questions Edition


I’m cutting the collection of mathematically-themed comic strips at the transition between months. The set I have through the 1st of October is long enough already. That’s mostly because the first couple of strips raised some big, at least somewhat mathematically-based topics. Those are fun to reason about, but take time to introduce. So let’s jump into them.

Lincoln Peirce’s Big Nate: First Class for the 27th of September was originally published the 22nd of September, 1991. Nate and Francis trade off possession of the basketball, and a strikingly high number of successful shots in a row considering their age, in the infinitesimally sliced last second of the game. There’s a rather good Zeno’s-paradox-type-question to be made out of this. Suppose the game started with one second to go and Nate ahead by one point, since it is his strip. At one-half second to go, Francis makes a basket and takes a one point lead. At one-quarter second to go, Nate makes a basket and takes a one point lead. At one-eighth of a second to go, Francis repeats the basket; at one-sixteenth of a second, Nate does. And so on. Suppose they always make their shots, and suppose that they are able to make shots without requiring any more than half the remaining time available. Who wins, and why?
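For anyone who wants to watch the lead change hands, here’s the puzzle’s first few steps as a Python sketch of my own. I’m assuming each basket is worth two points, so every shot turns a one-point deficit into a one-point lead:

```python
from fractions import Fraction

# Nate starts one point ahead; Francis shoots on odd steps, Nate on even.
# The k-th basket goes in with (1/2)^k seconds left on the clock.
nate, francis = 1, 0
for k in range(1, 11):
    time_left = Fraction(1, 2 ** k)
    if k % 2 == 1:
        francis += 2
    else:
        nate += 2
    leader = "Nate" if nate > francis else "Francis"
    print(f"{time_left} s left: Nate {nate}, Francis {francis} ({leader} leads)")
```

The lead flips at every step, and there are infinitely many steps before the clock reaches zero, so the sequence of leaders never settles down. That, I think, is the heart of the paradox.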

Tim Rickard’s Brewster Rockit for the 27th of September is built on the question of whether the universe might be just a computer simulation, and if so, how we might tell. Being a computer simulation is one of those things that would seem to explain why mathematics tells us so much about the universe. One can make a probabilistic argument about this. Suppose there is one universe, and there are some number of simulations of the universe. Call that number N. If we don’t know whether we’re in the real or the simulated universe, then it would seem we have an estimated probability of being in the real universe of one divided by (N + 1). The chance of being in the real universe starts out none too great and gets dismally small pretty fast.

But this does put us in philosophical difficulties. If we are in something that is a complete, logically consistent universe that cannot be escaped, how is it not “the real” universe? And if “the real” universe is accessible from within “the simulation” then how can they be separate? The question is hard to answer and it’s far outside my realm of competence anyway.

Mark Leiknes’s Cow and Boy Classics for the 27th of September originally ran the 15th of September, 2008. And it talks about the ideas of zero-point energy and a false vacuum. This is about something that seems core to cosmology: how much energy is there in a vacuum? That is, if there’s nothing in a space, how much energy is in it? Quantum mechanics tells us it isn’t zero, in part because matter and antimatter flutter into and out of existence all the time. And there’s gravity, which is hard to explain quite perfectly. Mathematical models of quantum mechanics, and gravity, make various predictions about how much the energy of the vacuum should be. Right now, the models don’t give us really good answers.

Some suggest that there might be more energy in the vacuum than we could ever use, and that if there were some way to draw it off — well, there’d never be a limit to anything ever again. I think this an overly optimistic projection. The opposite side of this suggests that if it is possible to draw energy out of the vacuum, that means it must be possible to shift empty space from its current state to a lower-energy state, much the way you can get energy out of a pile of rocks by making the rocks fall. But the lower-energy vacuum might have different physics in ways that make it very hard for us to live, or for us to exist. I think this an overly pessimistic projection. But I am not an expert in the fields, which include cosmology, quantum mechanics, and certain rather difficult tinkerings with the infinitely many.

Mason Mastroianni, Mick Mastroianni, and Perri Hart’s B.C. for the 28th of September is a joke in the form of true, but useless, word problem answers. Well, putting down a lower bound on what the answer is can help. If you knew what three times twelve was, you could get to four times twelve reliably, and that’s a help. But if you’re lost for three times twelve then you’re just stalling for time and the teacher knows it.

Paul Gilligan’s Pooch Cafe for the 28th of September uses the monkeys-on-keyboards concept. It’s shifted here to cats on a keyboard, but the principle is the same. Give a random process enough time and you can expect it to produce anything you want. It’s a matter of how long you can wait, though. And all the complications of how to make something that’s random. Cats won’t do it.

Mel Henze’s Gentle Creatures for the 29th of September is a rerun. I’m not sure when it was first printed. But it does use “ability to do mathematics” as a shorthand for “is intelligent at all”. That’s flattering to put in front of a mathematician, but I don’t think that’s really fair.

Paul Trap’s Thatababy for the 30th of September is a protest about using mathematics in real life. I’m surprised Thatababy’s Dad had an algebra teacher proclaiming differential equations would be used. Usually teachers assert that whatever they’re teaching will be useful, which is how we provide motivation.

How To Build Infinite Numbers


I had missed it, as mentioned in the above tweet. The link is to a page on the Form And Formalism blog, reprinting a translation of one of Georg Cantor’s papers in which he founded the modern understanding of sets, of infinite sets, and of infinitely large numbers. Although it gets into pretty heady topics, it doesn’t actually require a mathematical background, at least as I look at it; it just requires a willingness to follow long chains of reasoning, which I admit is much harder than algebra.

Cantor — whom I’d talked a bit about in a recent Reading The Comics post — was deeply concerned and intrigued by infinity. His paper enters into that curious space where mathematics, philosophy, and even theology blend together, since it’s difficult to talk about the infinite without people thinking of God. I admit the philosophical side of the discussion is difficult for me to follow, and the theological side harder yet, but a philosopher or theologian would probably have symmetric complaints.

The translation is provided as scans of a typewritten document, so you can see what it was like trying to include mathematical symbols in non-typeset text in the days before LaTeX (which is great at it, but requires annoying amounts of setup) or HTML (which is mediocre at it, but requires less setup) or Word (I don’t use Word) was available. Somehow, folks managed to live through times like that, but it wasn’t pretty.

Reading the Comics, February 20, 2015: 19th-Century German Mathematicians Edition


So, the mathematics comics ran away from me a little bit, and I didn’t have the chance to write up a proper post on Thursday or Friday. So I’m writing what I probably would have got to on Friday had time allowed, and there’ll be another in this sequence sooner than usual. I hope you’ll understand.

The title for this entry is basically thanks to Zach Weinersmith, because his comics over the past week gave me reasons to talk about Georg Cantor and Bernard Riemann. These were two of the many extremely sharp, extremely perceptive German mathematicians of the 19th Century who put solid, rigorously logical foundations under the work of centuries of mathematics, only to discover that this implied new and very difficult questions about mathematics. Some of them are good material for jokes.

Eric and Bill Teitelbaum’s Bottomliners panel (February 14) builds a joke around everything in some set of medical tests coming back negative, as well as the bank account. “Negative”, the word, has connotations that are … well, negative, which may inspire the question of why a medical test coming back “negative” corresponds with what is usually good news, nothing being wrong. As best I can make out the terminology derives from statistics. The diagnosis of any condition amounts to measuring some property (or properties), and working out whether it’s plausible that the measurements could reflect the body’s normal processes, or whether they’re such that there just has to be some special cause. A “negative” result amounts to saying that we are not forced to suppose something is causing these measurements; that is, we don’t have a strong reason to think something is wrong. And so in this context a “negative” result is the one we ordinarily hope for.


A bit more about Thomas Hobbes


You might remember a post from last April, Thomas Hobbes and the Doing of Important Mathematics, timed to the renowned philosopher’s birthday. I talked about him because a good bit of his intellectual life was spent trying to achieve mathematical greatness, which he never did.

Recently I’ve had the chance to read Douglas M Jesseph’s Squaring The Circle: The War Between Hobbes And Wallis, about Hobbes’s attempts to re-build mathematics on an intellectual foundation he found more satisfying, and the conflict this put him in with mainstream mathematicians, particularly John Wallis (algebra and calculus pioneer, and popularizer of the ∞ symbol). The situation of Hobbes’s mathematical ambitions is more complicated than I realized, although the one thing history teaches us is that the situation is always more complicated than we realized, and I wanted to at least make my writings about Hobbes a bit less incomplete. Jesseph’s book can’t be fairly reduced to a blog post, of course, and I’d recommend it to people who want to really understand what the fuss was all about. It’s a very good idea to have some background in philosophy and in 17th century English history going in, though, because it turns out a lot of the struggle — and particularly the bitterness with which Hobbes and Wallis fought, for decades — ties into the religious and political struggles of England of the 1600s.

Hobbes’s project, I better understand now, was not merely the squaring of the circle or the solving of other ancient geometric problems like the doubling of the cube or the trisecting of an arbitrary angle, although he did claim to have various proofs or approximate proofs of them. He seems to have been interested in building a geometry on more materialist grounds, more directly as models of the real world, instead of the pure abstractions that held sway then (and, for that matter, now). This is not by itself a ridiculous thing to do: we are almost always better off for having multiple independent ways to construct something, because the differences in those ways teach us not just about the thing, but about the methods we use to discover things. And purely abstract constructions have problems also: for example, if a line can be decomposed into nothing but an enormous number of points, and absolutely none of those points has any length, then how can the line have length? You can answer that, but it’s going to require a pretty long running start.

Trying to re-build the logical foundations of mathematics is an enormously difficult thing to do, and it’s not surprising that someone might fail to do so perfectly. Whole schools of mathematicians might be needed just to achieve mixed success. And Hobbes wasn’t able to attract whole schools of mathematicians, in good part because of who he was.

Hobbes achieved immortality as an important philosopher with the publication of Leviathan. What I had not appreciated and Jesseph made clear was that in the context of England of the 1650s, Hobbes’s views on the natures of God, King, Society, Law, and Authority managed to offend — in the “I do not know how I can continue to speak with a person who holds views like that” sense — pretty much everybody in England who had any strong opinion about anything in politics, philosophy, or religion. I do not know for a fact that Hobbes then went around kicking the pet dogs of any English folk who didn’t have strong opinions about politics, philosophy, or religion, but I can’t rule it out. At least part of the relentlessness and bitterness with which Wallis (and his supporters) attacked Hobbes, and with which Hobbes (and his supporters) attacked back, can be viewed as a spinoff of the great struggle between the Crown and Parliament that produced the Civil War, the Commonwealth, and the Restoration, and in that context it’s easier to understand why all parties carried on, often quibbling about extremely minor points, well past the point that their friends were advising them that the quibbling was making them look bad. Hobbes was a difficult person to side with, even when he was right, and a lot of his mathematics just wasn’t right. Some of it I’m not sure ever could be made right, however many ingenious people you had working to avoid flaws.

An amusing little point that Jesseph quotes is a bit in which Hobbes, making an argument about the rights that authority has, asserts that if the King decreed that Euclid’s Fifth Postulate should be taught as false, then false it would be in the kingdom. The Fifth Postulate, also known as the Parallel Postulate, is one of the axioms on which classical Greek geometry was built and it was always the piece that people didn’t like. The other postulates are all nice, simple, uncontroversial, common-sense things like “all right angles are equal”, the kinds of things so obvious they just have to be axioms. The Fifth Postulate is this complicated-sounding thing about how, if two lines are crossed by a third, and the interior angles on one side of that third line add up to less than two right angles, then the two lines will meet on that side.

It wouldn’t be really understood or accepted for another two centuries, but, you can suppose the Fifth Postulate to be false. This gives you things named “non-Euclidean geometries”, and the modern understanding of the universe’s geometry is non-Euclidean. In picking out an example of something a King might decree and the people would have to follow regardless of what was really true, Hobbes picked out an example of something that could be decreed false, and that people could follow profitably.

That’s not mere ironical luck, probably. A streak of mathematicians spent a long time trying to prove the Fifth Postulate was unnecessary, at least, by showing it followed from the remaining and non-controversial postulates, or at least that it could be replaced with something that felt more axiomatic. Of course, in principle you can use any set of axioms you like to work, but some sets produce more interesting results than others. I don’t know of any interesting geometry which results from supposing “not all right angles are equal”; supposing that the Fifth Postulate is untrue gives us the geometry on which general relativity is built, which is quite nice to have.

Again I have to warn that Jesseph’s book is not always easy reading. I had to struggle particularly over some of the philosophical points being made, because I’ve got only a lay understanding of the history of philosophy, and I was able to call on my love (a professional philosopher) for help at points. I imagine someone well-versed in philosophy but inexperienced with mathematics would have a similar problem (although — don’t let the secret out — you’re allowed to just skim over the diagrams and proofs and go on to the explanatory text afterwards). But for people who want to understand the scope and meaning of the fighting better, or who just want to read long excerpts of the wonderful academic insulting that was current in the era, I do recommend it. Check your local college or university library.

Reading the Comics, September 8, 2014: What Is The Problem Edition


Must be the start of school or something. In today’s roundup of mathematically-themed comics there are a couple of strips that I think touch on the question of defining just what the problem is: what are you trying to measure, what are you trying to calculate, what are the rules of this sort of calculation? That’s a lot of what’s really interesting about mathematics, which is how I’m able to say something about a rerun Archie comic. It’s not easy work but that’s why I get that big math-blogger paycheck.

Edison Lee works out the shape of the universe, and as ever in this sort of thing, he forgot to carry a number.
I’d have thought the universe to be at least three-dimensional.

John Hambrock’s The Brilliant Mind of Edison Lee (September 2) talks about the shape of the universe. Measuring the world, or the universe, is certainly one of the older influences on mathematical thought. From a handful of observations and some careful reasoning, for example, one can understand how large the Earth is, and how far away the Moon and the Sun must be, without going past the kinds of reasoning or calculations that a middle school student would probably be able to follow.

There is something deeper to consider about the shape of space, though: the geometry of the universe affects what things can happen in it, and can even be seen in the kinds of physics that happen. A famous, and astounding, result by the mathematical physicist Emmy Noether shows that symmetries in space correspond to conservation laws. That the universe is, apparently, rotationally symmetric — everything would look the same if the whole universe were picked up and rotated (say) 80 degrees along one axis — means that there is such a thing as the conservation of angular momentum. That the universe is time-symmetric — the universe would look the same if it had got started five hours later (please pretend that’s a statement that can have any coherent meaning) — means that energy is conserved. And so on. It may seem, superficially, like a cosmologist is engaged in some almost ancient-Greek-style abstract reasoning to wonder what shapes the universe could have and what it does, but (putting aside that it gets hard to divide mathematics, physics, and philosophy in this kind of field) we can imagine observable, testable consequences of the answer.

Zach Weinersmith’s Saturday Morning Breakfast Cereal (September 5) tells a joke starting with “two perfectly rational perfectly informed individuals walk into a bar”, along the way to a joke about economists. The idea of “perfectly rational perfectly informed” people is part of the mathematical modeling that’s become a popular strain of economic thought in recent decades. It’s a model, and like many models, is properly speaking wrong, but it allows one to describe interesting behavior — in this case, how people will make decisions — without complications you either can’t handle or aren’t interested in. The joke goes on to the idea that one can assign costs and benefits to continuing in the joke. The idea that one can quantify preferences and pleasures and happiness I think of as being made concrete by Jeremy Bentham and the utilitarian philosophers, although trying to find ways to measure things has been a streak in Western thought for close to a thousand years now, and rather fruitfully so. But I wouldn’t have much to do with protagonists who can’t stay around through the whole joke either.

Mark Anderson’s Andertoons (September 6) was probably composed in the spirit of joking, but it does hit something that I understand baffles kids learning it every year: that subtracting a negative number does the same thing as adding a positive number. To be fair to kids who need a couple months to feel quite confident in what they’re doing, mathematicians needed a couple generations to get the hang of it too. We have now a pretty sound set of rules for how to work with negative numbers, that’s nice and logically tested and very successful at representing things we want to know, but there seems to be a strong intuition that says “subtracting a negative three” and “adding a positive three” might just be different somehow, and we won’t really know negative numbers until that sense of something being awry is resolved.

Andertoons pops up again the next day (September 7) with a completely different drawing of a chalkboard and this time a scientist and a rabbit standing in front of it. The rabbit’s shown to be able to do more than multiply and, indeed, the mathematics is correct. Cosines and sines have a rather famous link to exponentiation and to imaginary- and complex-valued numbers, and it can be useful to change an ordinary cosine or sine into this exponentiation of a complex-valued number. Why? Mostly, because exponentiation tends to be pretty nice, analytically: you can multiply and divide terms pretty easily, you can take derivatives and integrals almost effortlessly, and then if you need a cosine or a sine you can get that out at the end again. It’s a good trick to know how to do.
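That famous link is Euler’s formula, e^(iθ) = cos θ + i sin θ. Here’s a quick numeric check in Python, my own sketch rather than anything from the strip:

```python
import cmath
import math

theta = 0.75  # any angle in radians will do

# Euler's formula: exp(i*theta) equals cos(theta) + i*sin(theta).
lhs = cmath.exp(1j * theta)
rhs = complex(math.cos(theta), math.sin(theta))
print(abs(lhs - rhs))  # essentially zero

# Run backwards, it recovers the cosine from two exponentials:
cos_theta = (cmath.exp(1j * theta) + cmath.exp(-1j * theta)) / 2
print(abs(cos_theta.real - math.cos(theta)))  # essentially zero again
```

That second computation is the direction the rabbit’s trick uses: trade a cosine for exponentials, do the easy analysis there, and convert back at the end.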

Jeff Harris’s Shortcuts children’s activity panel (September 9) is a page of stuff about “Geometry”, and it’s got some nice facts (some mathematical, some historical), and a fair bunch of puzzles about the field.

Morrie Turner’s Wee Pals (September 7, perhaps a rerun; Turner died several months ago, though I don’t know how far ahead of publication he was working) features a word problem in terms of jellybeans that underlines the danger of unwarranted assumptions in this sort of problem-phrasing.

Moose has trouble working out 15 percent of $8.95; Jughead explains why.
How far back is this rerun from if Moose got lunch for two for $8.95?

Craig Boldman and Henry Scarpelli’s Archie (September 8, rerun) goes back to one of arithmetic’s traditional comic strip applications, that of working out the tip. Poor Moose is driving himself crazy trying to work out 15 percent of $8.95, probably from a quiz-inspired fear that if he doesn’t get it correct to the penny he’s completely wrong. Being able to do a calculation precisely is useful, certainly, but he’s forgetting that in this real-world application he gets some flexibility in what has to be calculated. He’d save some effort if he realized the tip for $8.95 is probably close enough to the tip for $9.00 that he could afford the difference, most obviously, and (if his budget allows) that he could just as well work out one-sixth the bill instead of fifteen percent, and give up that workload in exchange for sixteen cents.
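To put numbers on Moose’s options, a sketch of my own; the prices come from the strip, the arithmetic from me:

```python
bill = 8.95
exact_tip = 0.15 * bill    # the to-the-penny answer Moose is straining for
easy_tip = 9.00 / 6        # one-sixth of the rounded-up bill: trivial to compute

print(f"exact 15% tip: ${exact_tip:.4f}")   # $1.3425
print(f"easy 1/6 tip:  ${easy_tip:.2f}")    # $1.50
print(f"difference:    {100 * (easy_tip - exact_tip):.0f} cents")  # 16 cents
```

Sixteen cents buys a lot of relief from long division at the lunch counter.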

Mark Parisi’s Off The Mark (September 8) is another entry into the world of anthropomorphized numbers, so you can probably imagine just what π has to say here.

In the Overlap between Logic, Fun, and Information


Since I do need to make up for my former ignorance of John Venn’s diagrams and how to use them, let me join in what looks early on like a massive Internet swarm of mentions of Venn. The Daily Nous, a philosophy-news blog, was my first hint that anything interesting was going on (as my love is a philosopher and is much more in tune with the profession than I am with mathematics), and I appreciate the way they describe Venn’s interesting properties. (Also, for me at least, that page recommends I read Dungeons and Dragons and Derrida, itself pointing to an installment of philosophy-based web comic Existentialist Comics, so you get a sense of how things go over there.)

https://twitter.com/saladinahmed/status/496148485092433920

And then a friend retweeted the above cartoon (available as T-shirt or hoodie), which does indeed parse as a Venn diagram if you take the left circle as representing “things with flat tails playing guitar-like instruments” and the right circle as representing “things with duck bills playing keyboard-like instruments”. Remember — my love is “very picky” about Venn diagram jokes — the intersection in a Venn diagram is not a blend of the things in the two contributing circles, but is rather, properly, something which belongs to both the groups of things.

https://twitter.com/mathshistory/status/496224786109198337

The 4th of August is also William Rowan Hamilton’s birthday. He’s known for the discovery of quaternions, which are kind of to complex-valued numbers what complex-valued numbers are to the reals, but they’re harder to make a fun Google Doodle about. Quaternions are a pretty good way of representing rotations in a three-dimensional space, but that just looks like rotating stuff on the computer screen.

Daily Nous

John Venn, an English philosopher who spent much of his career at Cambridge, died in 1923, but if he were alive today he would totally be dead, as it is his 180th birthday. Venn was named after the Venn diagram, owing to the fact that as a child he was terrible at math but good at drawing circles, and so was not held back in 5th grade. In celebration of this philosopher’s birthday Google has put up a fun, interactive doodle — just for today. Check it out.

Note: all comments on this post must be in Venn Diagram form.


Where Does A Plane Touch A Sphere?


Recently my dear love, the professional philosopher, got to thinking about a plane that just touches a sphere, and wondered: where does the plane just touch the sphere? I, the mathematician, knew just what to call that: it’s the “point of tangency”, or if you want a phrasing that’s a little less Law French, the “tangent point”. The tangent to a curve is a flat surface, of one lower dimension than the space has — on the two-dimensional plane the tangent’s a line; in three-dimensional space the tangent’s a plane; in four-dimensional space the tangent’s a pain to quite visualize perfectly — and, ordinarily, it touches the original curve at just the one point, locally anyway.

But, and this is a good philosophical objection, is a “point” really anywhere? A single point has no breadth, no width, it occupies no volume. Mathematically we’d say it has measure zero. If you had a glass filled to the brim and dropped a point into it, it wouldn’t overflow. If you tried to point at the tangent point, you’d miss it. If you tried to highlight the spot with a magic marker, you couldn’t draw a mark centered on that point; the best you could do is draw out a swath that, presumably, has the point, somewhere within it, somewhere.

This feels somehow like one of Zeno’s Paradoxes, although it’s not one of the paradoxes to have come down to us, at least so far as I understand them. Those are all about the problem that there seem to be conclusions, contrary to intuition, that result from supposing that space (and time) can be infinitely divided; but, there are at least as great problems from supposing that they can’t. I’m a bit surprised by that, since it’s so easy to visualize a sphere and a plane — it almost leaps into the mind as soon as you have a fruit and a table — but perhaps we just don’t happen to have records of the Ancients discussing it.

We can work out a good deal of information about the tangent point while staying on firm ground all the way to the end. For example: imagine the sphere sliced into a big and a small piece by a plane. Imagine moving the plane in the direction of the smaller slice; this produces a smaller slice yet. Keep repeating this ad infinitum and you’d have a slice of volume approaching zero, and a plane approaching tangency to the sphere. But then there is that plane so close to the edge of the sphere that the sphere isn’t cut at all, and there is something curious about that point.
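That shrinking slice can be computed exactly. A quick numerical sketch, using the standard spherical-cap volume formula (which isn’t in the discussion above, but is well established):

```python
import math

def cap_volume(r, h):
    """Volume of the spherical cap of height h sliced off a sphere of
    radius r by a plane: pi * h^2 * (3r - h) / 3 (standard formula)."""
    return math.pi * h * h * (3 * r - h) / 3

# Slide the cutting plane toward tangency (h -> 0): the sliced-off volume
# shrinks toward zero as the plane approaches the tangent plane.
for h in (0.5, 0.05, 0.005, 0.0005):
    print(f"h = {h}: volume = {cap_volume(1.0, h):.10f}")
```

At h = r the formula gives the hemisphere, and at h = 0 it gives exactly zero, which is the curious limiting case described above.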

I Know Nothing Of John Venn’s Diagram Work


My Dearly Beloved, the professional philosopher, mentioned after reading the last comics review that one thing to protest in the Too Much Coffee Man strip — showing Venn diagram cartoons and Things That Are Funny as disjoint sets — was that the Venn diagram was drawn wrong. In philosophy, you see, they’re taught to draw a Venn diagram for two sets as two slightly overlapping circles, and then to black out any parts of the diagram which haven’t got any elements. If there are three sets, you draw three overlapping circles of equal size and again black out the parts that are empty.

I granted that this is certainly better form, and indispensable if you don’t know which of the sets, intersections, and unions have any elements in them, but said that it was pretty much the default in mathematics to draw the loops that represent sets as not touching if you know the intersection of the sets is empty. That did get me to wondering what the proper way of doing things was, though, and I looked it up. And, indeed, according to MathWorld, I have been doing it wrong for a very long time. Per MathWorld (which is as good a general reference for this sort of thing as I can figure), to draw a Venn diagram reflecting data for N sets, the rules are:

  1. Draw N simple, closed curves on the plane, so that the curves partition the plane into 2^N connected regions.
  2. Have each subset of the N different sets correspond to one and only one region formed by the intersection of the curves.

Partitioning the plane is pretty much exactly what you might imagine from the ordinary English meaning of the word: you divide the plane into parts that are in this group or that group or some other group, with every point in the plane in exactly one of these partitions (or on the border between them). And drawing circles which never touch means that I (and Shannon Wheeler, and many people who draw Venn diagram cartoons) are not doing that first thing right: two circles that have no overlap, the way the cartoon shows, partition the plane into three pieces, not four.
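The counting behind that rule is easy to check mechanically. Each region of a proper Venn diagram corresponds to one in-or-out membership pattern across the N sets, which is where the 2^N comes from. A small sketch of my own, not MathWorld’s:

```python
from itertools import product

def membership_patterns(n):
    """Every region of a proper n-set Venn diagram corresponds to one
    pattern of 'in set i, or not' choices: 2**n patterns in all."""
    return list(product((False, True), repeat=n))

# Two disjoint circles make only 3 regions of the plane, one short of the
# 2**2 = 4 patterns a true two-set Venn diagram has to exhibit.
print(len(membership_patterns(2)))  # 4 regions required for two sets
print(len(membership_patterns(3)))  # 8 regions for three sets
```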

I can make excuses for my sloppiness. For one, I learned about Venn diagrams in the far distant past and never went back to check I was using them right. For another, the thing I most often do with Venn diagrams is work out probability problems. One approach for figuring out the probability of something happening is to identify the set of all possible outcomes of an experiment — for a much-used example, all the possible numbers that can come up if you throw three fair dice simultaneously — and identify how many of those outcomes are in the set of whatever you’re interested in — say, rolling a nine total, or rolling a prime number, or for something complicated, “rolling a prime number or a nine”. When you’ve done this, if every possible outcome is equally likely, the probability of the outcome you’re interested in is the number of outcomes that satisfy what you’re looking for divided by the number of outcomes possible.
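For the three-dice example that counting can be spelled out directly. Here’s a sketch enumerating all 216 equally likely outcomes; the specific fractions in the comments are my own arithmetic, not from the original:

```python
from itertools import product

def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# All 6**3 = 216 equally likely totals from throwing three fair dice.
totals = [sum(roll) for roll in product(range(1, 7), repeat=3)]

p_nine = sum(t == 9 for t in totals) / len(totals)              # 25/216
p_prime = sum(is_prime(t) for t in totals) / len(totals)        # 73/216
p_either = sum(t == 9 or is_prime(t) for t in totals) / len(totals)
```

Since nine isn’t prime, the “nine or prime” count is just the sum of the other two, 98 outcomes of the 216.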

If you get to working that way, then, you might end up writing a list of all the possible outcomes and drawing a big bubble around the outcomes that give you nine, and around the outcomes that give you a prime number, and those aren’t going to touch for the reasons you’d expect. I’m not sure that this approach is properly considered a Venn diagram anymore, though, although I’d introduced it in statistics classes as such and seen it called that in the textbook. There might not be a better name for it, but it is doing violence to the Venn diagram concept and I’ll try to be more careful in future.

The MathWorld page, by the way, provides a couple examples of Venn diagrams for more than three propositions, down towards the bottom of the page. The last one that I can imagine being of any actual use is the starfish shape used to work out five propositions at once. That shows off 32 possible combinations of sets and I can barely imagine finding that useful as a way to visualize the relations between things. There are also representations based on seven sets, which have 128 different combinations, and for 11 propositions, a mind-boggling 2,048 possible combinations. By that point the diagram is no use for visualizing relationships of sets and is simply mathematics as artwork.

Something else I had no idea about is that if you draw the three-circle Venn diagram, and set it so that the intersection of any two circles is at the center of the third, then the innermost intersection is a Reuleaux triangle, one of those oddball shapes that rolls as smoothly as a circle without actually being a circle. (MathWorld has an animated gif showing it rolling so.) This figure, it turns out, is also the basis for the Harry Watts square drill bit. It can be used as a spinning drill bit to produce a (nearly) square hole, which is again pretty amazing as I make these things out, and which my father will be delighted to know I finally understand, or have at least heard of.
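The property that lets the Reuleaux triangle roll smoothly is that it has the same width in every direction. That can be checked numerically; here’s a sketch of my own that samples the boundary (three 60-degree circular arcs, each centered on a vertex of an equilateral triangle of side 1) and measures the width in several directions:

```python
import math

def reuleaux_boundary(n=1000):
    """Sample the boundary of the Reuleaux triangle built on the
    equilateral triangle with vertices A, B, C and side 1."""
    A, B, C = (0.0, 0.0), (1.0, 0.0), (0.5, math.sqrt(3) / 2)
    points = []
    # Each arc has radius 1, is centered on one vertex, and sweeps 60
    # degrees between the other two vertices.
    for center, a0 in ((A, 0.0), (B, 2 * math.pi / 3), (C, 4 * math.pi / 3)):
        for k in range(n + 1):
            a = a0 + (math.pi / 3) * k / n
            points.append((center[0] + math.cos(a), center[1] + math.sin(a)))
    return points

def width(points, theta):
    """Width of the convex shape in direction theta: the spread of the
    boundary's projections onto that direction."""
    projections = [x * math.cos(theta) + y * math.sin(theta) for x, y in points]
    return max(projections) - min(projections)

pts = reuleaux_boundary()
for theta in (0.0, 0.3, 1.0, 2.0):
    print(round(width(pts, theta), 4))  # 1.0 in every direction
```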

In any case, the philosophy department did better teaching Venn diagrams properly than whatever math teacher I picked them up from did, or at least, my spouse retained the knowledge better than I did.

The Big Zero


I want to try re-proving the little point from last time: that the chance of picking one specific number from the range of zero to one is actually zero. This might not seem like a big point, but it can be done using a mechanism that turns up in about three-quarters of the proofs in real analysis, which is probably the most spirit-crushing course you take as a mathematics undergraduate, and I like that it can be shown in a way you can understand without knowing anything more sophisticated than the idea of “less than or equal to”.

So here’s my proposition: that the probability of selecting the number 1/2 from the range of numbers running from zero to one, is zero. This is assuming that you’re equally likely to pick any number. The technique I mean to use, and it’s an almost ubiquitous one, is to show that the probability has to be no smaller than zero, and no greater than zero, and therefore it has to be exactly zero. Very many proofs are done like this, showing that the thing you want can’t be smaller than some number, and can’t be greater than that same number, and we thus prove that it has to be that number.

Showing that the probability of picking exactly 1/2 can’t be smaller than zero is easy: the probability of anything is a number greater than or equal to zero, and less than or equal to one. (A few bright people have tried working out ways to treat probabilities that can be negative numbers, or that can be greater than one, but nobody’s come up with a problem that these approaches solve in a compelling way, and it’s really hard to figure out what a negative probability would mean in the observable world, so we leave the whole idea for someone after us to work out.) That was easy enough.
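Sketched in symbols (my notation, with X the uniformly chosen number), the squeeze runs like this: for any ε > 0, the chance of hitting 1/2 exactly is no more than the chance of landing within ε of it, so

```latex
0 \le P\left(X = \tfrac{1}{2}\right)
  \le P\left(\tfrac{1}{2} - \epsilon \le X \le \tfrac{1}{2} + \epsilon\right)
  = 2\epsilon .
```

The only number that is at least zero and at most 2ε for every positive ε is zero itself.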

Continue reading “The Big Zero”

Split Lines


My spouse, the professional philosopher, was sharing some of the engagingly wrong student responses. I hope it hasn’t shocked you to learn your instructors do this, but if you ever got something wrong in an amusing way, and it was easy for your instructor to find someone to commiserate with, yes, they said something.

The particular point this time was about Plato’s Analogy of the Divided Line, part of a Socratic dialogue that tries to classify the different kinds of knowledge. I’m not informed enough to describe fairly the point Plato was getting at, but the mathematics is plain enough. It starts with a line segment that gets divided into two unequal parts; each of the two parts is then divided into parts of the same proportion. Why this has to be I’m not sure (my understanding is it’s not clear exactly why Plato thought it important they be unequal parts), although it has got the interesting side effect of making exactly two of the four line segments of equal length.
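That side effect is quick to verify. A sketch with exact fractions (the 2:1 ratio is my choice of example; Plato doesn’t fix one):

```python
from fractions import Fraction

def divided_line(p):
    """Cut a unit segment into pieces of length p and 1-p, then cut each
    piece in that same ratio. Returns the four resulting lengths."""
    return (p * p, p * (1 - p), (1 - p) * p, (1 - p) * (1 - p))

segments = divided_line(Fraction(2, 3))
# The two middle segments are both p*(1-p), so exactly two of the four
# segments are equal in length, whatever unequal ratio you start with.
```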

Continue reading “Split Lines”

Kenneth Appel and Colored Maps


Word’s come through mathematics circles about the death of Kenneth Ira Appel, who along with Wolfgang Haken did one of those things every mathematically-inclined person really wishes to do: solve one of the long-running unsolved problems of mathematics. Even better, he solved one of those accessible problems. There are a lot of great unsolved problems that take a couple paragraphs just to set up for the lay audience (who then will wonder what use the problem is, as if that were the measure of interesting); Appel and Haken’s was the Four Color Theorem, which people can understand once they’ve used crayons and coloring books (even if they wonder whether it’s useful for anyone besides Hammond).
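The theorem’s statement, if not Appel and Haken’s proof, fits in a few lines of code. Here’s a sketch (entirely my own invented example) that checks a coloring of a small map is proper, meaning no two neighboring regions share a color:

```python
# A tiny made-up map as a symmetric adjacency table, and a candidate
# four-coloring of it. This only *checks* one coloring; proving that every
# planar map admits such a coloring is the hard part Appel and Haken settled.
neighbors = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "D"},
    "C": {"A", "B", "D"},
    "D": {"A", "B", "C", "E"},
    "E": {"D"},
}
coloring = {"A": 1, "B": 2, "C": 3, "D": 4, "E": 1}

def is_proper(neighbors, coloring):
    """True when no region shares its color with any of its neighbors."""
    return all(coloring[a] != coloring[b]
               for a, adjacent in neighbors.items() for b in adjacent)

print(is_proper(neighbors, coloring))  # True
```

Regions A through D all touch one another, so they genuinely need four distinct colors; E touches only D and can reuse one.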

It was, by everything I’ve read, a controversial proof at the time, although by the time I was an undergraduate the controversy had faded, the way controversial stuff stops seeming exciting decades on. The proximate controversy was that much of the proof was worked out by computer, which is the sort of thing that naturally alarms people whose jobs are to hand-carve proofs using coffee and scraps of lumber. The worry about that seems to have faded as more people get to use computers and find they’re not putting the proof-carvers out of work to any great extent, and as proof-checking software gets up to the task of doing what we would hope.

Still, the proof, right as it probably is, probably offers philosophers of mathematics a great example for figuring out just what is meant by a “proof”. The word implies that a proof is an argument which convinces a person of some proposition. But the Four Color Theorem proof is … well, according to Appel and Haken, 50 pages of text and diagrams, with 85 pages containing an additional 2,500 diagrams, and 400 microfiche pages with additional diagrams of verifications of claims made in the main text. I’ll never read all that, much less understand all that; it’s probably fair to say very few people ever will.

So I couldn’t, honestly, say it was proved to me. But that’s hardly the standard for saying whether something is proved. If it were, then every calculus class would produce the discovery that just about none of calculus has been proved, and that this whole “infinite series” thing sounds like it’s all doubletalk made up on the spot. And yet, we could imagine — at least, I could imagine — a day when none of the people who wrote the proof, or verified it for publication, or have verified it since then, are still alive. At that point, would the theorem still be proved?

(Well, yes: the original proof has been improved a bit, although it’s still a monstrously large one. And Neil Robertson, Daniel P. Sanders, Paul Seymour, and Robin Thomas published a proof, similar in spirit but rather smaller, and have been distributing the tools needed to check their work; I can’t imagine a time when nobody alive has done, or at least has the ability to do, the checking work.)

I’m treading into the philosophy of mathematics, and I realize my naivete about questions like what constitutes a proof is painful to anyone who really studies the field. I apologize for inflicting that pain.

Lose the Change


My Dearly Beloved, a professional philosopher, had explained to me once a fine point in the theory of just what it means to know something. I wouldn’t presume to try explaining that point (though I think I have it), but a core part of it is the thought experiment of remembering having put some change — we used a dime and a nickel — in your pocket, and finding later that you did have that same amount of money although not necessarily the same change — say, that you had three nickels instead.

That spun off a cute little side question that I’ll give to any needy recreational mathematician. It’s easy to imagine this problem where you remember having 15 cents in your pocket, and you do indeed have them, but you have a different number of coins from what you remember: three nickels instead of a dime and a nickel. Or you could remember having two coins, and indeed have two, but you have a different amount from what you remember: two dimes instead of a dime and a nickel.

Is it possible to remember correctly both the total number of coins you have, and the total value of those coins, while being mistaken about the number of each type? That is, could you remember rightly you have six coins and how much they add up to, but have the count of pennies, nickels, dimes, and quarters wrong? (In the United States there are also 50-cent and dollar coins minted, but they’re novelties and can be pretty much ignored. It’s all 1, 5, 10, and 25-cent pieces.) And can you prove it?
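The question yields to brute force, at least as an existence check. A sketch of my own follows, and it spoils the puzzle a little, so stop here if you’d rather work it by hand:

```python
from itertools import combinations_with_replacement
from collections import defaultdict

COINS = (1, 5, 10, 25)  # US pennies, nickels, dimes, quarters

def ambiguous_purses(max_coins=6):
    """Group coin multisets by (number of coins, total value); any group
    with more than one multiset is a pair you could confuse."""
    groups = defaultdict(list)
    for n in range(1, max_coins + 1):
        for combo in combinations_with_replacement(COINS, n):
            groups[(n, sum(combo))].append(combo)
    return {key: combos for key, combos in groups.items() if len(combos) > 1}

collisions = ambiguous_purses()
# Among the hits: four coins totaling 40 cents can be either
# (5, 5, 5, 25) or (10, 10, 10, 10).
```

So yes: it is possible to remember the coin count and the total correctly while being wrong about every denomination.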

Reblog: Kant & Leibniz on Space and Implications in Geometry


Mathematicians and philosophers are fairly content to share credit for René Descartes, possibly because he was able to provide catchy, easy-to-popularize cornerstones for both fields.

Immanuel Kant, these days at least, is almost exclusively known as a philosopher, and that he was also a mathematician and astronomer is buried in the footnotes. If you stick to math and science popularizations you’ll probably pick up (as I did) that Kant was one of the co-founders of the nebular hypothesis, the basic idea behind our present understanding of how solar systems form, and maybe, if the book has room, that Kant had the insight that knowing gravitation falls off by an inverse-square rule implies that we live in a three-dimensional space.

Frank DeVita here writes some about Kant’s (and Gottfried Wilhelm Leibniz’s) model of how we understand space and geometry. It’s not technical in the mathematics sense, although I do appreciate the background in Kant’s philosophy which my Dearly Beloved has given me. In any event, I’d like to offer it as a way for mathematically-minded people to understand more of an important thinker they may not have realized was in their field.

Frank DeVita

        

Kant’s account of space in the Prolegomena serves as a cornerstone for his thought and comes about in a discussion of the transcendental principles of mathematics that precedes remarks on the possibility of natural science and metaphysics. Kant begins his inquiry concerning the possibility of ‘pure’ mathematics with an appeal to the nature of mathematical knowledge, asserting that it rests upon no empirical basis, and thus is a purely synthetic product of pure reason (§6). He also argues that mathematical knowledge (pure mathematics) has the unique feature of first exhibiting its concepts in a priori intuition which in turn makes judgments in mathematics ‘intuitive’ (§7.281). For Kant, intuition is prior to our sensibility and the activity of reason since the former does not grasp ‘things in themselves,’ but rather only the things that can be perceived by the senses. Thus, what we can perceive is based…


Where Rap Music and Discrete Mathematics meet.


It’s the weekend; why not spread a bit of mathematics humor, using the basic element of mathematics humor, the Venn diagram?

Interestingly, Venn diagrams are also an overlap between Mathematics Humor and Philosophy Humor.


Some More Comic Strips


I might turn this into a regular feature. A couple more comic strips, all this week on gocomics.com, ran nice little mathematically-linked themes, and as far as I can tell I’m the only one who reads any of them so I might spread the word some.

Grant Snider’s Incidental Comics returns again with the Triangle Circus, in his strip of the 12th of March. This strip is also noteworthy for making use of “scalene”, which is also known as “that other kind of triangle” which nobody can remember the name for. (He’s had several other math-panel comic strips, and I really enjoy how full he stuffs the panels with drawings and jokes in most strips.)

Dave Blazek’s Loose Parts from the 15th of March puts up a version of the Cretan Paradox that amused me much more than I thought it would at first glance. I kept thinking back about it and grinning. (This blurs the line between mathematics and philosophy, but those lines have always been pretty blurred, particularly in the hotly disputed territory of Logic.)

Bud Fisher’s Mutt and Jeff is in reruns, of course, showing a random scattering of strips from the 1930s and 1940s that, really, seem to show off how far we’ve advanced in efficiency in setup-and-punchline since the early 20th century. But the rerun from the 17th of March (I can’t make out the publication date, although the figures in the article probably could be used to guess at the year) does demonstrate the sort of estimating-a-value that’s good mental exercise too.

I note that where Mutt divides 150,000,000 into 700,000,000 I would instead have divided 150,000,000 into 750,000,000, because that’s a much easier problem, and he just wanted an estimate anyway. It would get to the estimate of ten cents a week later in the word balloon more easily that way, too. Making estimates and approximations is in part an art, but I don’t think of anything that gives me 2/3rds of a cent as an intermediate value on the way to what I want as being a good approximation.
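The two routes to the estimate, side by side (a sketch; the strip’s figures are rough to begin with):

```python
# Mutt's route: divide 150 million into 700 million, then into weeks;
# 700/150 = 4 and 2/3, which leaves awkward fractions along the way.
mutts_way = 700_000_000 / 150_000_000 / 52

# Rounding up first: 750 million / 150 million is 5 dollars a year flat,
# and 5 / 52 is just under ten cents a week: the same estimate, less work.
easier_way = 750_000_000 / 150_000_000 / 52
```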

There’s nothing fresh from Bill Whitehead’s Free Range, though I’m still reading just in case.
