From my Fourth A-to-Z: Topology


In 2017 I reverted to just one A-to-Z per year. And I got banner art for the first time. It’s a small bit of polish that raised my apparent professionalism a whole order of magnitude. And for the letter T, I did something no pop mathematics blog had ever done before. I wrote about topology without starting from stretchy rubber doughnuts and coffee cups. Let me prove that to you now.


Today’s glossary entry comes from Elke Stangl, author of the Elkemental Force blog. I’ll do my best, although it would have made my essay a bit easier if I’d had the chance to do another topic first. We’ll get there.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Topology.

Start with a universe. Nice thing to have around. Call it ‘M’. I’ll get to why that name.

I’ve talked a fair bit about weird mathematical objects that need some bundle of traits to be interesting. So this will change the pace some. Here, I request only that the universe have a concept of “sets”. OK, that carries a little baggage along with it. We have to have intersections and unions. Those come about from having pairs of sets. The intersection of two sets is all the things that are in both sets simultaneously. The union of two sets is all the things that are in one set, or the other, or both simultaneously. But it’s hard to think of something that could have sets that couldn’t have intersections and unions.

So from your universe ‘M’ create a new collection of things. Call it ‘T’. I’ll get to why that name. But if you’ve formed a guess about why, then you know. So I suppose I don’t need to say why, now. ‘T’ is a collection of subsets of ‘M’. Now let’s suppose these four things are true.

First. ‘M’ is one of the sets in ‘T’.

Second. The empty set ∅ (which has nothing at all in it) is one of the sets in ‘T’.

Third. Whenever two sets are in ‘T’, their intersection is also in ‘T’.

Fourth. Whenever two (or more) sets are in ‘T’, their union is also in ‘T’.

Got all that? I imagine a lot of shrugging and head-nodding out there. So let’s take that. Your universe ‘M’ and your collection of sets ‘T’ are a topology. And that’s that.

Yeah, that’s never that. Let me put in some more text. Suppose we have a universe that consists of two symbols, say, ‘a’ and ‘b’. There’s four distinct topologies you can make of that. Take the universe plus the collection of sets ∅, {a}, {b}, and {a, b}. That’s a topology. Try it out. That’s the first collection you would probably think of.

Here’s another collection. Take this two-thing universe and the collection of sets ∅, {a}, and {a, b}. That’s another topology and you might want to double-check that. Or there’s this one: the universe and the collection of sets ∅, {b}, and {a, b}. Last one: the universe and the collection of sets ∅ and {a, b} and nothing else. That one barely looks legitimate, but it is. Not a topology: the universe and the collection of sets ∅, {a}, and {b}.
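
If you’d like to check rules like these by computer, here’s a minimal sketch in Python (the function name and the use of frozensets are my own choices, nothing standard) that tests the four rules against collections like the ones above.

```python
from itertools import combinations

def is_topology(universe, collection):
    """Check the four rules for a topology on 'universe'.

    'collection' is a set of frozensets, each a subset of 'universe'.
    """
    universe = frozenset(universe)
    if universe not in collection:             # first rule: M is in T
        return False
    if frozenset() not in collection:          # second rule: the empty set is in T
        return False
    for a, b in combinations(collection, 2):   # third and fourth rules, pair by pair
        if a & b not in collection or a | b not in collection:
            return False
    return True

M = {'a', 'b'}
T_good = {frozenset(), frozenset({'a'}), frozenset({'b'}), frozenset({'a', 'b'})}
T_bad = {frozenset(), frozenset({'a'}), frozenset({'b'})}   # leaves out {a, b}
print(is_topology(M, T_good))   # True
print(is_topology(M, T_bad))    # False
```

For a finite collection, checking pairwise unions is enough to settle the fourth rule, since any bigger union builds up two sets at a time.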

The number of topologies grows surprisingly with the number of things in the universe. Like, if we had three symbols, ‘a’, ‘b’, and ‘c’, there would be 29 possible topologies. The universe of the three symbols and the collection of sets ∅, {a}, {b, c}, and {a, b, c}, for example, would be a topology. But the universe and the collection of sets ∅, {a}, {b}, {c}, and {a, b, c} would not. It’s a good thing to ponder if you need something to occupy your mind while awake in bed.

With four symbols, there’s 355 possibilities. Good luck working those all out before you fall asleep. Five symbols have 6,942 possibilities. You realize this doesn’t look like any expected sequence. After the 4 for two symbols, the count of topologies isn’t anything obvious like “two to the number of symbols” or “the number of symbols factorial” or something.
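
Those counts are tedious to verify by hand but easy to believe after a brute-force check. Here’s a rough sketch in the same spirit as the checker above (again, names and approach are mine, just for illustration); it tries every collection of subsets of a small universe and counts the ones that obey the rules. It reproduces 1, 1, 4, 29, and 355 for zero through four symbols, and stops being practical well before five.

```python
from itertools import combinations

def count_topologies(n):
    universe = frozenset(range(n))
    subsets = [frozenset(c) for r in range(n + 1)
               for c in combinations(range(n), r)]
    middle = [s for s in subsets if s and s != universe]
    count = 0
    # Every topology must contain the empty set and the universe;
    # try every way of adding some of the other subsets.
    for r in range(len(middle) + 1):
        for extra in combinations(middle, r):
            T = set(extra) | {frozenset(), universe}
            if all(a & b in T and a | b in T for a in T for b in T):
                count += 1
    return count

for n in range(5):
    print(n, count_topologies(n))   # 0 1, 1 1, 2 4, 3 29, 4 355 (the last takes a few seconds)
```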

Are you getting ready to call me on being inconsistent? In the past I’ve talked about topology as studying what we can know about geometry without involving the idea of distance. How’s that got anything to do with this fiddling about with sets and intersections and stuff?

So now we come to that name ‘M’, and what it’s finally mnemonic for. I have to touch on something Elke Stangl hoped I’d write about, but a letter someone else bid on first. That would be a manifold. I come from an applied-mathematics background so I’m not sure I ever got a proper introduction to manifolds. They appeared one day in the background of some talk about physics problems. I think they were introduced as “it’s a space that works like normal space”, and that was it. We were supposed to pretend we had always known about them. (I’m translating. What we were actually told would be that it “works like R^3”. That’s how mathematicians say “like normal space”.) That was all we needed.

Properly, a manifold is … eh. It’s something that works kind of like normal space. That is, it’s a set, something that can be a universe. And it has to be something we can define “open sets” on. The open sets for the manifold follow the rules I gave for a topology above. You can make a collection of these open sets. And the empty set has to be in that collection. So does the whole universe. The intersection of two open sets in that collection is itself in that collection. The union of open sets in that collection is in that collection. If all that’s true, then we have our universe with a topology on it, same as before. For a manifold we ask one thing more: around every point there has to be some open set that looks, as far as open sets are concerned, just like a patch of ordinary space. That’s the part that earns the phrase “works like normal space”.

And now the piece that makes every pop mathematics article about topology talk about doughnuts and coffee cups. It’s possible that two topologies might be homeomorphic to each other. “Homeomorphic” is a term of art. But you understand it if you remember that “morph” means shape, and suspect that “homeo” is probably close to “homogenous”. Two things being homeomorphic means you can match their parts up. In the matching there’s nothing left over in the first thing or the second. And the relations between the parts of the first thing are the same as the relations between the parts of the second thing.

So. Imagine the snippet of the number line for the numbers larger than -π and smaller than π. Think of all the open sets you can use to cover that. It will have a set like “the numbers bigger than 0 and less than 1”. A set like “the numbers bigger than -π and smaller than 2.1”. A set like “the numbers bigger than 0.01 and smaller than 0.011”. And so on.

Now imagine the points that exist on a circle, if you’ve omitted one point. Let’s say it’s the unit circle, centered on the origin, and that what we’re leaving out is the point that’s exactly to the left of the origin. The open sets for this are the arcs that cover some part of this punctured circle. There’s the arc that corresponds to the angles from 0 to 1 radian measure. There’s the arc that corresponds to the angles from -π to 2.1 radians. There’s the arc that corresponds to the angles from 0.01 to 0.011 radians. You see where this is going. You see why I say we can match those sets on the number line to the arcs of this punctured circle. There’s some details to fill in here. But you probably believe me this could be done if I had to.
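
If you want the matching written down, here’s a small Python sketch (my own illustration, not anything from the essay) of the function doing the work: send the number t to the point (cos t, sin t), and go back with the two-argument arctangent. Numbers strictly between -π and π land on every point of the circle except (-1, 0), the punctured-out point, and the two functions undo each other.

```python
import math

def to_circle(t):
    """Send a number in (-pi, pi) to a point on the unit circle."""
    return (math.cos(t), math.sin(t))

def from_circle(point):
    """Send a point of the punctured circle back to a number in (-pi, pi)."""
    x, y = point
    return math.atan2(y, x)

for t in (-3.0, -0.5, 0.0, 1.0, 3.1):
    assert abs(from_circle(to_circle(t)) - t) < 1e-12

# An open interval of numbers matches an open arc of the circle:
# (0.01, 0.011) on the line corresponds to the arc between angles 0.01 and 0.011.
```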

There’s two (or three) great branches of topology. One is called “algebraic topology”. It’s the one that makes for fun pop mathematics articles about imaginary rubber sheets. It’s called “algebraic” because this field makes it natural to study the holes in a sheet. And those holes tend to form groups and rings, basic pieces of Not That Algebra. The field (I’m told) can be interpreted as looking at functors from spaces to groups and rings. This makes for some neat tying-together of subjects this A To Z round.

The other branch is called “differential topology”, which is a great field to study because it sounds like what Mister Spock is thinking about. It inspires awestruck looks where saying you study, like, Bayesian probability gets blank stares. Differential topology is about differentiable functions on manifolds. This gets deep into mathematical physics.

As you study mathematical physics, you stop worrying about ever solving specific physics problems. Specific problems are petty stuff. What you like is solving whole classes of problems. A steady trick for this is to try to find some properties that are true about the problem regardless of what exactly it’s doing at the time. This amounts to finding a manifold that relates to the problem. Consider a central-force problem, for example, with planets orbiting a sun. A planet can’t move just anywhere. It can only be in places and moving in directions that give the system the same total energy that it had to start. And the same linear momentum. And the same angular momentum. We can match these constraints to manifolds. Whatever the planet does, it does it without ever leaving these manifolds. To know the shapes of these manifolds — how they are connected — and what kinds of functions are defined on them tells us something of how the planets move.
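
Here’s a rough numerical sketch of that idea (a toy setup of my own, with an inverse-square force and every constant set to 1, nothing standard about the names). However the orbit wanders, the energy and angular momentum computed along the way barely budge; the planet’s state stays on the level sets those two quantities carve out.

```python
import math

def energy(x, y, vx, vy):
    return 0.5 * (vx * vx + vy * vy) - 1.0 / math.hypot(x, y)

def angular_momentum(x, y, vx, vy):
    return x * vy - y * vx

# Start on a mildly eccentric orbit.
x, y, vx, vy = 1.0, 0.0, 0.0, 1.1
dt = 1e-4
E0, L0 = energy(x, y, vx, vy), angular_momentum(x, y, vx, vy)

for _ in range(200_000):               # a leapfrog step: half kick, drift, half kick
    r3 = math.hypot(x, y) ** 3
    vx -= x / r3 * dt / 2
    vy -= y / r3 * dt / 2
    x += vx * dt
    y += vy * dt
    r3 = math.hypot(x, y) ** 3
    vx -= x / r3 * dt / 2
    vy -= y / r3 * dt / 2

print(abs(energy(x, y, vx, vy) - E0))             # stays tiny
print(abs(angular_momentum(x, y, vx, vy) - L0))   # stays tiny
```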

The maybe-third branch is “low-dimensional topology”. This is what differential topology is for two- or three- or four-dimensional spaces. You know, shapes we can imagine with ease in the real world. Maybe imagine with some effort, for four dimensions. This kind of branches out of differential topology because having so few dimensions to work in makes a lot of problems harder. We need specialized theoretical tools that only work for these cases. Is that enough to count as a separate branch? It depends what topologists you want to pick a fight with. (I don’t want a fight with any of them. I’m over here in numerical mathematics when I’m not merely blogging. I’m happy to provide space for anyone wishing to defend her branch of topology.)

But each grows out of this quite general, quite abstract idea, also known as “point-set topology”, that’s all about sets and collections of sets. There is much that we can learn from thinking about how to collect the things that are possible.

My Little 2021 Mathematics A-to-Z: Atlas


I owe Elkement thanks again for a topic. They’re author of the Theory and Practice of Trying to Combine Just Anything blog. And the subject lets me circle back around topology.

Atlas.

Mathematics is like every field in having jargon. Some jargon is unique to the field; there is no lay meaning of a “homeomorphism”. Some jargon is words plucked from the common language, such as “smooth”. The common meaning may guide you to what mathematicians want in it. A smooth function has a graph with no gaps, no discontinuities, no sharp corners; you can see smoothness in it. Sometimes the common meaning is an ambiguous help. A “series” is the sum of a sequence of numbers, that is, it is one number. Mathematicians study the series, but by looking at properties of the sequence.

So what sort of jargon is “atlas”? In common English, an atlas is a book of maps. Each map represents something different. Perhaps a different region of space. Perhaps a different scale, or a different projection altogether. The maps may show different features, or show them at different times. The maps must be about the same sort of thing. No slipping a map of Narnia in with the map of an amusement park, unless you warn of that in the title. The maps must not contradict one another. (So far as human-made things can be consistent, anyway.) And that’s the important stuff.

Atlas is the first kind of common-word jargon. Mathematicians use it to mean a collection of things. Those collected things aren’t mathematical maps. “Map” is the second type of jargon. The collected things are coordinate charts. “Coordinate chart” is a pairing of words not likely to appear in common English. But if you did encounter them? The meaning you might guess from their common use is not far off their mathematical use.

A coordinate chart is a matching of the points in an open set to normal coordinates. Euclidean coordinates, to be precise. But, you know, latitude and longitude, if it’s two dimensional. Add in the altitude if it’s three dimensions. Your x-y-z coordinates. It still counts if this is one dimension, or four dimensions, or sixteen dimensions. You’re less likely to draw a sketch of those. (In practice, you draw a sketch of a three-dimensional blob, and put N = 16 off in the corner, maybe in a box.)

These coordinate charts are on a manifold. That’s the second type of common-language jargon. Manifold, to pick the least bad of its manifold common definitions, is a “complicated object or subject”. The mathematical manifold is a surface. The things on that surface are connected by relationships that could be complicated. But the shape can be as simple as a plane or a sphere or a torus.

Every point on a coordinate chart needs some unique set of coordinates. And if a point appears on two coordinate charts, they have to be consistent. Consistent here means the matching between the charts is a homeomorphism. A homeomorphism is a map, in the jargon sense. So it’s a function matching open sets on one chart to open sets in the other chart. There’s more to it (there always is). But the important thing is that, away from the edges of the chart, we don’t create any new gaps or punctures or missing sections.

Some manifolds are easy to spot. The surface of the Earth, for example. Many are easy to come up with charts for. Think of any map of the Earth. Each point on the surface of the Earth matches some point on the sheet of paper. The coordinate chart is … let’s say how far your point is from the upper left corner of the page. (Pretend that you can measure those points precisely enough to match them to, like, the town you’re in.) Could be how far you are from the center, or the lower right corner, or whatever. These are all as good, and even count as other coordinate charts.

It’s easy to imagine that as latitude and longitude. We see maps of the world arranged by latitude and longitude so often. And that’s fine; latitude and longitude makes a good chart. But we have a problem in giving coordinates to the north and south pole. The latitude is easy but the longitude? So we have two points that can’t be covered on the map. We can save our atlas by having a couple charts. For the Earth this can be a map of most of the world arranged by latitude and longitude, and then two insets showing a disc around the north and the south poles. Thus we have an atlas of three charts.

We can make this a little tighter, reducing this to two charts. Have one that’s your normal sort of wall map, centered on the equator. Have the other be a transverse Mercator map. Make its center the great circle going through the prime meridian and the 180-degree antimeridian. Then every point on the planet, including the poles, has a neat unambiguous coordinate in at least one chart. A good chunk of the world will be on both charts. We can throw in more charts if we like, but two is enough.
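
A different, but standard, two-chart atlas for the sphere uses stereographic projection from each pole. It isn’t the wall-map-plus-Mercator pair just described, but it shows the same bookkeeping in a form that’s short to write down. This Python sketch (function names mine) sends a point of the sphere into each chart and checks that, where the charts overlap, they agree up to the expected change of coordinates.

```python
import math

def chart_north(p):
    """Stereographic projection from the north pole; covers everything but (0, 0, 1)."""
    x, y, z = p
    return (x / (1 - z), y / (1 - z))

def chart_south(p):
    """Stereographic projection from the south pole; covers everything but (0, 0, -1)."""
    x, y, z = p
    return (x / (1 + z), y / (1 + z))

def transition(u, v):
    """The change of coordinates between the two charts, valid away from the poles."""
    r2 = u * u + v * v
    return (u / r2, v / r2)

# A point on the unit sphere away from both poles.
theta, phi = 1.1, 0.4
p = (math.sin(theta) * math.cos(phi),
     math.sin(theta) * math.sin(phi),
     math.cos(theta))

u, v = chart_north(p)
s, t = chart_south(p)
ss, tt = transition(u, v)
assert abs(ss - s) < 1e-12 and abs(tt - t) < 1e-12
```

The overlap of the two charts is everything except the two poles, and on that overlap the change of coordinates is continuous both ways, which is the consistency the atlas asks for.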

The requirements to be an atlas aren’t hard to meet. So a lot of geometric structures end up being atlases. Theodore Frankel’s wonderful The Geometry of Physics introduces them on page 15. But that’s also the last appearance of “atlas”, at least in the index. The idea gets upstaged. The manifolds that the atlas charts end up being more interesting. Many problems about things in motion are easy to describe as paths traced out on manifolds. A large chunk of mathematical physics is then looking at this problem and figuring out what the space of possible behaviors looks like. What its topology is.

In a sense, the mathematical physicist might survey a problem, like a scout exploring new territory, more than solve it. This exploration brings us to directional derivatives. To tangent bundles. To other terms, jargon only partially informed by the common meanings.


And we draw to the final weeks of 2021, and of the Little 2021 Mathematics A-to-Z. All this year’s essays should be at this link. And all my glossary essays from every year should be at this link. Thank you for reading!

My Little 2021 Mathematics A-to-Z: Torus


Mr Wu, a mathematics tutor in Singapore and author of the blog about that, offered this week’s topic. It’s about one of the iconic mathematics shapes.

Torus.

When one designs a board game, one has to decide what the edge of the board means. Some games make getting to the edge the goal, such as Candy Land or backgammon. Some games set their play so the edge is unreachable, such as Clue or Monopoly. Some make the edge an impassable limit, such as Go or Scrabble or Checkers. And sometimes the edge becomes something different.

Consider a strategy game like Risk or Civilization or their video game descendants like Europa Universalis. One has to be able to go east, or west, without limit. But there’s no making a cylindrical board. Or making a board infinite in extent, side to side. Instead, the game demands we connect borders. Moving east one space from just-at-the-Eastern-edge means we put the piece at just-at-the-Western-edge. As a video game this is seamless. As a tabletop game we just learn to remember those units in Alberta are not so far from Kamchatka as they look. We have the awkward point that the board doesn’t let us go over the poles. It doesn’t hurt game play: no one wants to invade Russia from the north. We can represent a boundless space on our table.

Sometimes we need more. Consider the arcade game Asteroids. The player’s spaceship hopes to survive by blasting into dust the asteroids cluttered around them. The game ‘board’ is the arcade screen, a manageable slice of space. Asteroids move in any direction, often drifting off-screen. If they were out of the game, this would make victory so easy as to be unsatisfying. So the game takes a tip from the strategy games, and connects the right edge of the screen to the left. If we ask why an asteroid last seen moving to the right now appears on the left, well, there are answers. One is to say we’re in a very average segment of a huge asteroid field. There’s about as many asteroids that happen to be approaching from off-screen as recede from us. Why our local work destroying asteroids eliminates the off-screen asteroids is a mystery for the ages. Perhaps the rest of the fleet is also asteroid-clearing at about our pace. What matters is we still have to do something with the asteroids.

Almost. We’ve still got asteroids leaking away through the top and bottom. But we can use the same trick the right and left edges do. And now we have some wonderful things. One is a balanced game. Another is the space in which ship and asteroids move. It is no rectangle now, but a torus.
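
The programming behind that seamlessness is nothing more than arithmetic modulo the screen size. A tiny sketch (screen dimensions and names made up by me, not anything from the actual game):

```python
WIDTH, HEIGHT = 1024.0, 768.0

def wrap(x, y):
    """Keep a position on screen by identifying left with right and top with bottom."""
    return (x % WIDTH, y % HEIGHT)

print(wrap(1030.0, 400.0))   # (6.0, 400.0): drifted off the right, back on the left
print(wrap(512.0, -10.0))    # (512.0, 758.0): drifted off the top, back at the bottom
```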

This is a neat space to explore. It’s unbounded, for example, just as the surface of the Earth is. Or (it appears) the actual universe is. Set your course right and your spaceship can go quite a long way without getting back to exactly where it started from, again much like the surface of the Earth or the universe. We can impersonate an unbounded space using a manageably small set of coordinates, a decent-size game board.

That’s a nice trick to have. Many mathematics problems are about how great blocks of things behave. And it’s usually easiest to model these things if there aren’t boundaries. We can model boundaries, sure, but they’re hard to handle, most of the time. So we analyze great, infinitely-extending stretches of things.

Analysis does great things. But we need sometimes to do simulations, too. Computers are, as ever, a great temptation for this. Look at a spreadsheet with hundreds of rows and columns of cells. Each can represent a point in space, interacting with whatever’s nearby by whatever our rule is. And this can do very well … except these cells have to represent a finite territory. A million rows can’t span more than one million times the greatest distance between rows. We have to handle that.

There are tricks. One is to model the cells as being at ever-expanding distances, trusting that there are regions too dull to need much attention. Another is to give the boundary some values that, we figure, look as generic as possible. That “past here it carries on like that”. The trick that makes rhetorical sense to mention here is creating a torus, matching left edge to right, top edge to bottom. Front edge to back if it’s a three-dimensional model.
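
In a grid simulation the same identification is just index arithmetic. Here’s a minimal sketch (the particular rule, summing each cell’s four nearest neighbors, is my own stand-in for whatever update a real model would use): the row above the first row is the last row, the column past the last column is the first, and no cell ever sits on an edge.

```python
def neighbor_sum(grid):
    """Sum the four nearest neighbors of every cell, with wrap-around edges."""
    rows, cols = len(grid), len(grid[0])
    out = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            out[i][j] = (grid[(i - 1) % rows][j] + grid[(i + 1) % rows][j]
                         + grid[i][(j - 1) % cols] + grid[i][(j + 1) % cols])
    return out

grid = [[1, 0, 0],
        [0, 0, 0],
        [0, 0, 0]]
# The 1 sits in a corner, but its neighbors wrap around: the cells at the far
# ends of its row and column pick it up just as the adjacent cells do.
for row in neighbor_sum(grid):
    print(row)   # [0, 1, 1] / [1, 0, 0] / [1, 0, 0]
```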

Making a torus works if a particular spot is mostly affected by its local neighborhood. This describes a lot of problems we find interesting. Many of them are in statistical mechanics, where we do a lot of problems about particles in grids that can do one of two things, depending on the locale. But many mechanics problems work like this too. If we’re interested in how a satellite orbits the Earth, we can ignore that Saturn exists, except maybe as something it might photograph.

And just making a grid into a torus doesn’t solve every problem. This is obvious if you imagine making a torus that’s two rows and two columns linked together. There won’t be much interesting behavior there. Even a reasonably large grid offers problems. There might be structures larger than the torus is across, for example, that are worth studying, and those will be missed. That we have a grid means that a shape is easier to represent if it’s horizontal or vertical. In a real continuous space there are no directions to be partial to.

There are topology differences too. A famous result shows that four colors are enough to color any map on the plane. On the torus seven can be needed: seven colors always suffice, and some maps use all seven. Putting colors on things may seem like a trivial worry. But map colorings represent information about how stuff can be connected. And here’s a huge difference in these connections.

This all is about one aspect of a torus. Likely you came in wondering when I would get to talking about doughnut shapes, and the line about topology may have readied you to hear about coffee cups. The torus, like most any mathematical concept familiar enough that ordinary people know the word, connects to many ideas. Some of them have more than one hole. Some have surfaces that intersect themselves. Some extend into four or more dimensions. Some are even constructs that appear in phase space, describing ways that complicated physical systems can behave. These are all reflections of this shape idea that we can learn from thinking about game boards.


This and all of this year’s Little Mathematics A to Z essays should be at this link. And the A-to-Z essays for every year should be at this link.

Homologies and Cohomologies explained quickly


I’d hoped to have a pretty substantial post today. I fell short of having time to edit the beast into shape. I apologize but hope to have that soon.

I also hope to soon have an announcement about a Mathematics A-to-Z for this year. But until then, here’s this.

Several years ago in an A-to-Z I tried to explain cohomologies. I wasn’t satisfied with it, as, in part, I couldn’t think of a good example. You know, something you could imagine demonstrating with specific physical objects. I can reel off definitions, once I look up the definitions, but there’s only so many people who can understand something from that.

Quanta Magazine recently ran an article about homologies. It’s a great piece, if we get past the introduction of topology with that doughnut-and-coffee-cup joke. (Not that it’s wrong, just that it’s tired.) It’s got pictures, too, which is great.

This I came to notice because Refurio Anachro on Mathstodon wrote a bit about it. This in a thread of toots talking about homologies and cohomologies. The thread at this link is more for mathematicians than the lay audience, unlike the Quanta Magazine article. If you’re comfortable reading about simplexes and linear operators and multifunctions you’re good. Otherwise … well, I imagine you trust that cohomologies can take care of themselves. But I feel better-informed for reading the thread. And it includes a link to a downloadable textbook in algebraic topology, useful for people who want to give that a try on their own.

Reading the Comics, April 1, 2021: Why Is Gunther Taking Algebraic Topology Edition


I’m not yet looking to discuss every comic strip with any mathematics mention. But something gnawed at me in this installment of Greg Evans and Karen Evans’s Luann. It’s about the classes Gunther says he’s taking.

The main characters in Luann are in that vaguely-defined early-adult era. They’re almost all attending a local university. They’re at least sophomores, since they haven’t been doing stories about the trauma and liberation of going off to school. How far they’ve gotten has been completely undefined. So here’s what gets me.

Gunther, looking at sewing patterns: 'You want me to sew pirate outfits?' Bets: 'I'm thinking satin brocade doublet and velvet pantaloons.' Les, not in the conversation: 'Nerd.' Gunther: 'I'm thinking algebraic topology and vector calculus homework.' (He shows his textbooks.) Les: 'And nerdier.' (Les pets a cat.)
Greg Evans and Karen Evans’s Luann for the 1st of April, 2021. This and other essays discussing topics raised by Luann are at this link. The overall story here is that Bets wants to have this pirate-themed dinner and trusts Gunther, who’s rather good at clothes-making, to do the decorating.

Gunther taking vector calculus? That makes sense. Vector calculus is a standard course if you’re taking any mathematics-dependent major. It might be listed as Multivariable Calculus or Advanced Calculus or Calculus III. It’s where you learn partial derivatives, integrals along a path, integrals over a surface or volume. I don’t know Gunther’s major, but if it’s any kind of science, yeah, he’s taking vector calculus.

Algebraic topology, though. That I don’t get. Topology at all is usually an upper-level course. It’s for mathematics majors, maybe physics majors. Not every mathematics major takes topology. Algebraic topology is a deeper specialization of the subject. I’ve only seen courses listed as algebraic topology as graduate courses. It’s possible for an undergraduate to take a graduate-level course, yes. And it may be that Gunther is taking a regular topology course, and the instructor prefers to focus on algebraic topology.

But even a regular topology course relies on abstract algebra. Which, again, is something you’ll get as an undergraduate. If you’re a mathematics major you’ll get at least two years of algebra. And, if my experience is typical, still feel not too sure about the subject. Thing is that Intro to Abstract Algebra is something you’d plausibly take at the same time as Vector Calculus.  Then you’d get Abstract Algebra and then, if you wished, Topology.

So you see the trouble. I don’t remember anything in algebra-to-topology that would demand knowing vector calculus. So it wouldn’t mean Gunther took courses without taking the prerequisites. But it’s odd to take an advanced mathematics course at the same time as a basic mathematics course. Unless Gunther’s taking an advanced vector calculus course, which he might be. Although since he wants to emphasize that he’s taking difficult courses, it’s odd to not say “advanced”. Especially if he is tossing in “algebraic” before topology.

And, yes, I’m aware of the Doylist explanation for this. The Evanses wanted courses that sound impressive and hard. And that’s all the scene demands. The joke would not be more successful if they picked two classes from my actual Junior year schedule. None of the characters have a course of study that could be taken literally. They’ve been university students full-time since 2013 and aren’t in their senior year yet. It would be fun, is all, to find a way this makes sense.


This and my other essays discussing something from the comic strips are at this link.

Some topological fun


A friend sent me this tweet, start of a thread of some mathematically neat parks.

This jungle gym has the shape of one of the classic three-dimensional representations of the Klein bottle. It’s one of pop mathematics’s favorite shapes, up there with the Möbius strip, another all-time favorite.

Both the Klein bottle and the Möbius strip have many possible appearances, for about the same reason there are many kinds of trapezoids or octagons or whatnot. Möbius strips are easy enough to make in real life. Klein bottles, not so; the shape needs four dimensions of space and we just don’t have them. We’ll represent it with a shape that loops back through itself, but a real Klein bottle wouldn’t do that, for the same reason a wireframe cube’s edges don’t intersect the way the lines of its photograph do.

It makes a good wireframe shape, though. I’m surprised not to see more playground equipment using it.

My All 2020 Mathematics A to Z: K-Theory


I should have gone with Vayuputrii’s proposal that I talk about the Kronecker Delta. But both Jacob Siehler and Mr Wu proposed K-Theory as a topic. It’s a big and an important one. That was compelling. It’s also a challenging one. This essay will not teach you K-Theory, or even get you very far in an introduction. It may at least give some idea of what the field is about.

Color cartoon illustration of a coati in a beret and neckerchief, holding up a director's megaphone and looking over the Hollywood hills. The megaphone has the symbols + x (division obelus) and = on it. The Hollywood sign is, instead, the letters MATHEMATICS. In the background are spotlights, with several of them crossing so as to make the letters A and Z; one leg of the spotlights has 'TO' in it, so the art reads out, subtly, 'Mathematics A to Z'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

K-Theory.

This is a difficult topic to discuss. It’s an important theory. It’s an abstract one. The concrete examples are either too common to look interesting or are already deep into things like “tangent bundles of S^{n-1}”. There are people who find tangent bundles quite familiar concepts. My blog will not be read by a thousand of them this month. Those who are familiar with the legends grown around Alexander Grothendieck will nod on hearing he was a key person in the field. Grothendieck was of great genius, and also spectacular indifference to practical mathematics. Allegedly he once, pressed to apply something to a particular prime number for an example, proposed 57, which is not prime. (One does not need to be a genius to make a mistake like that. If I proposed 447 or 449 as prime numbers, how long would you need to notice I was wrong?)

K-Theory predates Grothendieck. Now that we know it’s a coherent mathematical idea we can find elements leading to it going back to the 19th century. One important theorem has Bernhard Riemann’s name attached. Henri Poincaré contributed early work too. Grothendieck did much to give the field a particular identity. Also a name, the K coming from the German Klasse. Grothendieck pioneered what we now call Algebraic K-Theory, working on the topic as a field of abstract algebra. There is also a Topological K-Theory, early work on which we thank Michael Atiyah and Friedrich Hirzebruch for. Topology is, popularly, thought of as the mathematics of flexible shapes. It is, but we get there from thinking about relationships between sets, and these are the topologies of K-Theory. We understand these now as different ways of understanding structures.

Still, one text I found described (topological) K-Theory as “the first generalized cohomology theory to be studied thoroughly”. I remember how much handwaving I had to do to explain what a cohomology is. The subject looks intimidating because of the depth of technical terms. Every field is deep in technical terms, though. These look more rarefied because we haven’t talked much, or deeply, into the right kinds of algebra and topology.

You find at the center of K-Theory either “coherent sheaves” or “vector bundles”. Which alternative depends on whether you prefer Algebraic or Topological K-Theory. Both alternatives are ways to encode information about the space around a shape. Let me talk about vector bundles because I find that easier to describe. Take a shape, anything you like. A closed ribbon. A torus. A Möbius strip. Draw a curve on it. Every point on that curve has a tangent plane, the plane that just touches your original shape, and that’s guaranteed to touch your curve at one point. What are the directions you can go in that plane? That collection of directions is a fiber, the tangent space at that point. String the fibers for every point together and you have a fiber bundle, here a tangent bundle. (As ever, do not use this at your thesis defense for algebraic topology.)

Now: what are all those fibers, for all the points along that curve? Does their relationship tell you anything about the original curve? The question is leading. If their relationship told us nothing, this would not be a subject anyone studies. If you pick a point on the curve and look at its fiber, and you move that point some, how does the fiber change?
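
Here’s the simplest case, sketched numerically (my own illustration, nothing standard in the names). Along the unit circle, the fiber at the point (cos t, sin t) is spanned by the direction (-sin t, cos t). Sampling the curve shows that direction turning smoothly and never vanishing, and a never-vanishing choice like that is the kind of thing meant, further down, by a tangent bundle being trivial.

```python
import math

def point(t):
    """A point on the unit circle."""
    return (math.cos(t), math.sin(t))

def tangent(t):
    """A direction you can move in while staying on the circle at point(t)."""
    return (-math.sin(t), math.cos(t))

for k in range(12):
    t = 2 * math.pi * k / 12
    tx, ty = tangent(t)
    assert abs(math.hypot(tx, ty) - 1.0) < 1e-12   # never zero; in fact always unit length
```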

If we start with the right sorts of topological spaces, then we can get some interesting sets of bundles. What makes them interesting is that we can form them into a ring. A ring means that we have a set of things, and an operation like addition, and an operation like multiplication. That is, the collection of things works somewhat like the integers do. This is a comfortable familiar behavior after pondering too much abstraction.

Why create such a thing? The usual reasons. Often it turns out calculating something is easier on the associated ring than it is on the original space. What are we looking to calculate? Typically, we’re looking for invariants. Things that are true about the original shape whatever ways it might be rotated or stretched or twisted around. Invariants can be things as basic as “the number of holes through the solid object”. Or they can be as ethereal as “the total energy in a physics problem”. Unfortunately if we’re looking at invariants that familiar, K-Theory is probably too much overhead for the problem. I confess to feeling overwhelmed by trying to learn enough to say what it is for.

There are some big things which it seems well-suited to do. K-Theory describes, in its way, how the structure of a set of items affects the functions it can have. This links it to modern physics. The great attention-drawing topics of 20th century physics were quantum mechanics and relativity. They still are. The great discovery of 20th century physics has been learning how much of it is geometry. How the shape of space affects what physics can be. (Relativity is the accessible reflection of this.)

And so K-Theory comes to our help in string theory. String theory exists in that grand unification where mathematics and physics and philosophy merge into one. I don’t toss philosophy into this as an insult to philosophers or to string theoreticians. Right now it is very hard to think of ways to test whether a particular string theory model is true. We instead ponder what kinds of string theory could be true, and how we might someday tell whether they are. When we ask what things could possibly be true, and how to tell, we are working for the philosophy department.

My reading tells me that K-Theory has been useful in condensed matter physics. That is, when you have a lot of particles and they interact strongly. When they act like liquids or solids. I can’t speak from experience, either on the mathematics or the physics side.

I can talk about an interesting mathematical application. It’s described in detail in section 2.3 of Allen Hatcher’s text Vector Bundles and K-Theory. It comes about from consideration of the Hopf invariant, named for Heinz Hopf for what I trust are good reasons. It also comes from consideration of homomorphisms. A homomorphism is a matching between two sets of things that preserves their structure. This has a precise definition, but I can make it casual. If you have noticed that every (American, hourlong) late-night chat show is basically the same? The host at his desk, the jovial band leader, the monologue, the show rundown? Two guests and a band? (At least in normal times.) Then you have noticed the homomorphism between these shows. A mathematical homomorphism is more about preserving the products of multiplication. Or it preserves the existence of a thing called the kernel. That is, you can match up elements and how the elements interact.

What’s important is Adams’ Theorem of the Hopf Invariant. I’ll write this out (quoting Hatcher) to give some taste of K-Theory:

The following statements are true only for n = 1, 2, 4, and 8:
a. R^n is a division algebra.
b. S^{n - 1} is parallelizable, i.e., there exist n - 1 tangent vector fields to S^{n - 1} which are linearly independent at each point, or in other words, the tangent bundle to S^{n - 1} is trivial.

This is, I promise, low on jargon. “Division algebra” is familiar to anyone who did well in abstract algebra. It means a ring where every element, except for zero, has a multiplicative inverse. That is, division exists. “Linearly independent” is also a familiar term, to the mathematician. Almost every subject in mathematics has a concept of “linearly independent”. The exact definition varies but it amounts to the set of things having neither redundant nor missing elements.

The proof from there sprawls out over a bunch of ideas. Many of them I don’t know. Some of them are simple. The conditions on the Hopf invariant, all that S^{n - 1} stuff, eventually turn into finding values of n for which 2^n divides 3^n - 1, for example. There are only three values of ‘n’ that do that.
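
That last arithmetic condition, at least, is easy to poke at directly. A one-line check (mine, purely for illustration) finds the same three values:

```python
print([n for n in range(1, 200) if (3 ** n - 1) % (2 ** n) == 0])   # [1, 2, 4]
```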

What all that tells us is that if you want to do something like division on ordered sets of real numbers you have only a few choices. You can have a single real number, R^1 . Or you can have an ordered pair, R^2 . Or an ordered quadruple, R^4 . Or you can have an ordered octuple, R^8 . And that’s it. Not that other ordered sets can’t be interesting. They will all diverge far enough from the way real numbers work that you can’t do something that looks like division.

And now we come back to the running theme of this year’s A-to-Z. Real numbers are real numbers, fine. Complex numbers? We have some ways to understand them. One of them is to match each complex number with an ordered pair of real numbers. We have to define a more complicated multiplication rule than “first times first, second times second”. This rule is the rule implied if we come to R^2 through this avenue of K-Theory. We get this matching between real numbers and the first great expansion on real numbers.

The next great expansion of complex numbers is the quaternions. We can understand them as ordered quartets of real numbers. That is, as R^4 . We need to make our multiplication rule a bit fussier yet to do this coherently. Guess what fuss we’d expect coming through K-Theory?

R^8 seems the odd one out; who does anything with that? There is a set of numbers that neatly matches this ordered set of octuples. It’s called the octonions, sometimes called the Cayley Numbers. We don’t work with them much. We barely work with quaternions, as they’re a lot of fuss. Multiplication on them doesn’t even commute. (They’re very good for understanding rotations in three-dimensional space. You can also use them as vectors. You’ll do that if your programming language supports quaternions already.) Octonions are more challenging. Not only does their multiplication not commute, it’s not even associative. That is, if you have three octonions — call them p, q, and r — you can expect that p times the product of q-and-r would be different from the product of p-and-q times r. Real numbers don’t work like that. Complex numbers or quaternions don’t either.
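
Here’s a rough sketch of those failures (my own, using the Cayley-Dickson doubling construction, which is one standard way to build these number systems as nested pairs of real numbers). It shows quaternion multiplication refusing to commute and octonion multiplication refusing to associate.

```python
# Cayley-Dickson doubling: a number at each level is a pair of numbers from the
# level below.  Reals -> complex numbers -> quaternions -> octonions.

def neg(x):
    return (neg(x[0]), neg(x[1])) if isinstance(x, tuple) else -x

def conj(x):
    return (conj(x[0]), neg(x[1])) if isinstance(x, tuple) else x

def add(x, y):
    return (add(x[0], y[0]), add(x[1], y[1])) if isinstance(x, tuple) else x + y

def mul(x, y):
    if not isinstance(x, tuple):
        return x * y
    a, b = x
    c, d = y
    # (a, b)(c, d) = (ac - conj(d) b, da + b conj(c)), one standard convention
    return (add(mul(a, c), neg(mul(conj(d), b))),
            add(mul(d, a), mul(b, conj(c))))

def zero(level):
    return 0 if level == 0 else (zero(level - 1), zero(level - 1))

def basis(level, k):
    """The k-th basis unit: level 1 gives complex numbers, 2 quaternions, 3 octonions."""
    if level == 0:
        return 1
    half = 2 ** (level - 1)
    if k < half:
        return (basis(level - 1, k), zero(level - 1))
    return (zero(level - 1), basis(level - 1, k - half))

i, j = basis(2, 1), basis(2, 2)                      # two quaternion units
print(mul(i, j) == neg(mul(j, i)))                   # True: ij = -ji, no commuting

e1, e2, e4 = basis(3, 1), basis(3, 2), basis(3, 4)   # three octonion units
print(mul(mul(e1, e2), e4) == neg(mul(e1, mul(e2, e4))))   # True: no associating
```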

Octonions let us have a meaningful division, so we could write out p ÷ q and know what it meant. We won’t see that for any bigger ordered set R^n, any n past 8. And K-Theory is one of the tools which tells us we may stop looking.

This is hardly the last word in the field. It’s barely the first. It is at least an understandable one. The abstractness of the field works against me here. It does offer some compensations. Broad applicability, for example; a theorem tied to few specific properties will work in many places. And pure aesthetics too. Much work, in statements of theorems and their proofs, involves lovely diagrams. You’ll see great lattices of sets relating to one another. They’re linked by chains of homomorphisms. And, in further aesthetics, beautiful words strung into lovely sentences. You may not know what it means to say “Pontryagin classes also detect the nontorsion in π_k(SO(n)) outside the stable range”. I know I don’t. I do know when I hear a beautiful string of syllables and that is a joy of mathematics never appreciated enough.


Thank you for reading. The All 2020 A-to-Z essays should be available at this link. The essays from every A-to-Z sequence, 2015 to present, should be at this link. And I am still open for M, N, and O essay topics. Thanks for your attention.

My 2019 Mathematics A To Z: Koenigsberg Bridge Problem


Today’s A To Z term was nominated by Bunny Hugger. I’m glad to write about it. The problem is foundational to both graph theory and topology.

I’m more fluent in graph theory, and my writing will reflect that. But its critical insight involves looking at spaces and ignoring things like distance and area and angle. It is amazing that one can discard so much of geometry and still have anything to consider. What we do learn then applies to very many problems.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it to make the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Königsberg Bridge Problem.

Once upon a time there was a city named Königsberg. It no longer is. It is Kaliningrad now. It’s no longer in that odd non-contiguous chunk of Prussia facing the Baltic Sea. It’s now in that odd non-contiguous chunk of Russia facing the Baltic Sea.

I put it this way because what the city evokes, to mathematicians, is a story. I do not have specific reason to think the story untrue. But it is a good story, and as I think more about history I grow more skeptical of good stories. A good story teaches, though not always the thing it means to convey.

The story is this. The city is on two sides of the Pregel river, now the Pregolya River. Two large islands are in the river. For several centuries these four land masses were connected by a total of seven bridges. And we are told that people in the city would enjoy free time with an idle puzzle. Was there a way to walk all seven bridges one and only one time? If no one did something foul like taking a boat to cross the river, or not going the whole way across a bridge, anyway? There were enough bridges, though, and enough possible ways to cross them, that trying out every option was hopeless.

Then came Leonhard Euler. Who is himself a preposterous number of stories. Pick any major field of mathematics; there is an Euler’s Theorem at its center. Or an Euler’s Formula. Euler’s Method. Euler’s Function. Likely he brought great new light to it.

And in 1736 he solved the Königsberg Bridge Problem. The answer was to look at what would have to be true for a solution to exist. He noticed something so obvious it required genius not to dismiss it. It seems too simple to be useful. In a successful walk you enter each land mass (river bank or island) the same number of times you leave it. So if you cross each bridge exactly once, you use an even number of bridges per land mass. The exceptions are that you must start at one land mass, and end at a land mass. Maybe a different one. How you get there doesn’t count for the problem. How you leave doesn’t either. So the land mass you start from may have an odd number of bridges. So may the one you end on. So there are up to two land masses that may have an odd number of bridges.

Once this is observed, it’s easy to tell that Königsberg’s bridges did not match that. All four land masses in Königsberg had an odd number of bridges. And so we could stop looking. It’s impossible to walk the seven bridges exactly once each in a tour, not without cheating.
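
Euler’s criterion is simple enough to turn into a few lines of code. Here’s a sketch (the labels for the land masses and the function name are mine): count how many bridges touch each land mass, and a walk crossing every bridge exactly once can exist only if at most two of those counts are odd, granted the land masses are all connected by bridges, which here they are.

```python
from collections import Counter

def euler_walk_possible(bridges):
    """True when some walk could cross every bridge exactly once."""
    degree = Counter()
    for a, b in bridges:
        degree[a] += 1
        degree[b] += 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    return odd <= 2

# The seven bridges of Königsberg: north bank N, south bank S,
# the Kneiphof island A, and the eastern island B.
bridges = [('N', 'A'), ('N', 'A'), ('S', 'A'), ('S', 'A'),
           ('N', 'B'), ('S', 'B'), ('A', 'B')]
print(euler_walk_possible(bridges))   # False: every land mass touches an odd number
```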

Graph theoreticians, like the topologists of my prologue, now consider this foundational to their field. To look at a geographic problem and not concern oneself with areas and surfaces and shapes? To worry only about how sets connect? This guides graph theory in how to think about networks.

The city exists, as do the islands, and the bridges existed as described. So does Euler’s solution. And his reasoning is sound. The reasoning is ingenious, too. Everything hard about the problem evaporates. So what do I doubt about this fine story?

Well, I don’t know that this bridge problem was something the people of Königsberg thought about. At least not in the way it’s presented, this idle problem everyone who visited the river wondered about without trying very hard to solve. The only people I ever hear discussing this are mathematicians. And mathematicians are as fond of good stories as anyone else, and accept them even when the reality is messy and ambiguous and confused. I’m not alone in having doubts. The Mathematical Association of America’s web page about the problem concedes it is “according to lore” that the people of the city had this problem.

Teo Paoletti, author of that web page, says Danzig mayor Carl Leonhard Gottlieb Ehler wrote Euler, asking for a solution. This falls short of proving that the bridges were a common subject of speculation. It does show at least that Ehler thought it worth pondering. Euler apparently did not think it was even mathematics. Not that he thought it was hard; he simply thought it didn’t depend on mathematical principles. It took only reason. But he did find something interesting: why was it not mathematics? Paoletti quotes Euler as writing:

This question is so banal, but seemed to me worthy of attention in that [neither] geometry, nor algebra, nor even the art of counting was sufficient to solve it.

I am reminded of a mathematical joke. It’s about the professor who always went on at great length about any topic, however slight. I have no idea why this should stick with me. Finally one day the professor admitted of something, “This problem is not interesting.” The students barely had time to feel relief. The professor went on: “But the reasons why it is not interesting are very interesting. So let us explore that.”

The Königsberg Bridge Problem is in the first chapter of every graph theory book ever. And it is a good graph theory problem. It may not be fair to say it created graph theory, though. Euler seems to have treated this as a little side bit of business, unrelated to his real mathematics. Graph theory as we know it — as a genre — formed in the 19th century. So did topology. In hindsight we can see how studying these bridges brought us good questions to ask, and ways to solve them. But for something like a century after Euler published this, it was just the clever solution to a recreational mathematics puzzle. It was as important as finding knight’s tours of chessboards.

That we take it as the introduction to graph theory, and maybe topology, tells us something. It is an easy problem to pose. Its solution is clever, but not obscure. It takes no long chains of complex reasoning. Many people approach mathematics problems with fear. By telling this story, we promise mathematics that feels as secure as a stroll along the riverfront. This promise is good through about chapter three, section four, where there are four definitions on one page and the notation summons obscure demons of LaTeX.

Still. Look at what the story of the bridges tells us. We notice something curious about our environment. The problem seems mathematical, or at least geographic. The problem is of no consequence. But it lingers in the mind. The obvious approaches to solving it won’t work. But think of the problem differently. The problem becomes simple. And better than simple. It guides one to new insights. In a century it gives birth to two fields of mathematics. In two centuries these are significant fields. They’re things even non-mathematicians have heard of. It’s almost a mathematician’s fantasy of insight and accomplishment.

But this does happen. The world suggests no end of little mathematics problems. Sometimes they are wonderful. Richard Feynman’s memoirs tell of his imagination being captured by a plate spinning in the air. Solving that helped him resolve a problem in developing Quantum Electrodynamics. There are more mundane problems. One of my professors in grad school remembered tossing and catching a tennis racket and realizing he didn’t know why sometimes it flipped over and sometimes didn’t. His specialty was in dynamical systems, and he could work out the mechanics of what a tennis racket should do, and when. And I know that within me is the ability to work out when a pile of books becomes too tall to stand on its own. I just need to work up to it.

The story of the Königsberg Bridge Problem is about this. Even if nobody but the mayor of Danzig pondered how to cross the bridges, and he only got an answer because he infected Euler with the need to know? It is a story of an important piece of mathematics. Good stories will tell us things that are true, which are not necessarily the things that happen in them.


Thanks for reading this. All of the Fall 2019 A To Z posts ought to be at this link. On Thursday I should publish my ‘L’ post. All of my past A To Z essays should be available at this link. And tomorrow I hope to finish off the comic strips worth just quick mentions from last week. See you then.

The Summer 2017 Mathematics A To Z: Topology


Today’s glossary entry comes from Elke Stangl, author of the Elkemental Force blog. I’ll do my best, although it would have made my essay a bit easier if I’d had the chance to do another topic first. We’ll get there.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Topology.

Start with a universe. Nice thing to have around. Call it ‘M’. I’ll get to why that name.

I’ve talked a fair bit about weird mathematical objects that need some bundle of traits to be interesting. So this will change the pace some. Here, I request only that the universe have a concept of “sets”. OK, that carries a little baggage along with it. We have to have intersections and unions. Those come about from having pairs of sets. The intersection of two sets is all the things that are in both sets simultaneously. The union of two sets is all the things that are in one set, or the other, or both simultaneously. But it’s hard to think of something that could have sets that couldn’t have intersections and unions.

So from your universe ‘M’ create a new collection of things. Call it ‘T’. I’ll get to why that name. But if you’ve formed a guess about why, then you know. So I suppose I don’t need to say why, now. ‘T’ is a collection of subsets of ‘M’. Now let’s suppose these four things are true.

First. ‘M’ is one of the sets in ‘T’.

Second. The empty set ∅ (which has nothing at all in it) is one of the sets in ‘T’.

Third. Whenever two sets are in ‘T’, their intersection is also in ‘T’.

Fourth. Whenever two (or more) sets are in ‘T’, their union is also in ‘T’.

Got all that? I imagine a lot of shrugging and head-nodding out there. So let’s take that. Your universe ‘M’ and your collection of sets ‘T’ are a topology. And that’s that.

Yeah, that’s never that. Let me put in some more text. Suppose we have a universe that consists of two symbols, say, ‘a’ and ‘b’. There’s four distinct topologies you can make of that. Take the universe plus the collection of sets {∅}, {a}, {b}, and {a, b}. That’s a topology. Try it out. That’s the first collection you would probably think of.

Here’s another collection. Take this two-thing universe and the collection of sets {∅}, {a}, and {a, b}. That’s another topology and you might want to double-check that. Or there’s this one: the universe and the collection of sets {∅}, {b}, and {a, b}. Last one: the universe and the collection of sets {∅} and {a, b} and nothing else. That one barely looks legitimate, but it is. Not a topology: the universe and the collection of sets {∅}, {a}, and {b}.

The number of toplogies grows surprisingly with the number of things in the universe. Like, if we had three symbols, ‘a’, ‘b’, and ‘c’, there would be 29 possible topologies. The universe of the three symbols and the collection of sets {∅}, {a}, {b, c}, and {a, b, c}, for example, would be a topology. But the universe and the collection of sets {∅}, {a}, {b}, {c}, and {a, b, c} would not. It’s a good thing to ponder if you need something to occupy your mind while awake in bed.

With four symbols, there’s 355 possibilities. Good luck working those all out before you fall asleep. Five symbols have 6,942 possibilities. You realize this doesn’t look like any expected sequence. After ‘4’ the count of topologies isn’t anything obvious like “two to the number of symbols” or “the number of symbols factorial” or something.

Are you getting ready to call me on being inconsistent? In the past I’ve talked about topology as studying what we can know about geometry without involving the idea of distance. How’s that got anything to do with this fiddling about with sets and intersections and stuff?

So now we come to that name ‘M’, and what it’s finally mnemonic for. I have to touch on something Elke Stangl hoped I’d write about, but a letter someone else bid on first. That would be a manifold. I come from an applied-mathematics background so I’m not sure I ever got a proper introduction to manifolds. They appeared one day in the background of some talk about physics problems. I think they were introduced as “it’s a space that works like normal space”, and that was it. We were supposed to pretend we had always known about them. (I’m translating. What we were actually told would be that it “works like R3”. That’s how mathematicians say “like normal space”.) That was all we needed.

Properly, a manifold is … eh. It’s something that works kind of like normal space. That is, it’s a set, something that can be a universe. And it has to be something we can define “open sets” on. The open sets for the manifold follow the rules I gave for a topology above. You can make a collection of these open sets. And the empty set has to be in that collection. So does the whole universe. The intersection of two open sets in that collection is itself in that collection. The union of open sets in that collection is in that collection. If all that’s true, then we have a manifold.

And now the piece that makes every pop mathematics article about topology talk about doughnuts and coffee cups. It’s possible that two topologies might be homeomorphic to each other. “Homeomorphic” is a term of art. But you understand it if you remember that “morph” means shape, and suspect that “homeo” is probably close to “homogenous”. Two things being homeomorphic means you can match their parts up. In the matching there’s nothing left over in the first thing or the second. And the relations between the parts of the first thing are the same as the relations between the parts of the second thing.

So. Imagine the snippet of the number line for the numbers larger than -π and smaller than π. Think of all the open sets you can use to cover that. It will have a set like “the numbers bigger than 0 and less than 1”. A set like “the numbers bigger than -π and smaller than 2.1”. A set like “the numbers bigger than 0.01 and smaller than 0.011”. And so on.

Now imagine the points that exist on a circle, if you’ve omitted one point. Let’s say it’s the unit circle, centered on the origin, and that what we’re leaving out is the point that’s exactly to the left of the origin. The open sets for this are the arcs that cover some part of this punctured circle. There’s the arc that corresponds to the angles from 0 to 1 radian measure. There’s the arc that corresponds to the angles from -π to 2.1 radians. There’s the arc that corresponds to the angles from 0.01 to 0.011 radians. You see where this is going. You see why I say we can match those sets on the number line to the arcs of this punctured circle. There’s some details to fill in here. But you probably believe me this could be done if I had to.

There’s two (or three) great branches of topology. One is called “algebraic topology”. It’s the one that makes for fun pop mathematics articles about imaginary rubber sheets. It’s called “algebraic” because this field makes it natural to study the holes in a sheet. And those holes tend to form groups and rings, basic pieces of Not That Algebra. The field (I’m told) can be interpreted as looking at functors on groups and rings. This makes for some neat tying-together of subjects this A To Z round.

The other branch is called “differential topology”, which is a great field to study because it sounds like what Mister Spock is thinking about. It inspires awestruck looks where saying you study, like, Bayesian probability gets blank stares. Differential topology is about differentiable functions on manifolds. This gets deep into mathematical physics.

As you study mathematical physics, you stop worrying about ever solving specific physics problems. Specific problems are petty stuff. What you like is solving whole classes of problems. A steady trick for this is to try to find some properties that are true about the problem regardless of what exactly it’s doing at the time. This amounts to finding a manifold that relates to the problem. Consider a central-force problem, for example, with planets orbiting a sun. A planet can’t move just anywhere. It can only be in places and moving in directions that give the system the same total energy that it had to start. And the same linear momentum. And the same angular momentum. We can match these constraints to manifolds. Whatever the planet does, it does it without ever leaving these manifolds. To know the shapes of these manifolds — how they are connected — and what kinds of functions are defined on them tells us something of how the planets move.
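Here’s a rough numerical sketch of that in Python. None of the numbers or names mean anything in particular; the point is just that however the planet wanders, the energy and angular momentum computed along the way stay put, to within the integrator’s error. That staying-put is what pins the motion to those manifolds.

    GM = 1.0                 # gravitational parameter, arbitrary units
    x, y = 1.0, 0.0          # starting position
    vx, vy = 0.0, 0.8        # starting velocity; this gives an ellipse
    dt = 0.001

    def energy(x, y, vx, vy):
        r = (x * x + y * y) ** 0.5
        return 0.5 * (vx * vx + vy * vy) - GM / r

    def angular_momentum(x, y, vx, vy):
        return x * vy - y * vx

    print("start:", energy(x, y, vx, vy), angular_momentum(x, y, vx, vy))

    for step in range(20000):
        # Semi-implicit Euler: nudge the velocity by gravity, then the
        # position by the new velocity. Crude, but it respects the conserved
        # quantities well enough to make the point.
        r3 = (x * x + y * y) ** 1.5
        vx -= GM * x / r3 * dt
        vy -= GM * y / r3 * dt
        x += vx * dt
        y += vy * dt

    print("later:", energy(x, y, vx, vy), angular_momentum(x, y, vx, vy))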

The maybe-third branch is “low-dimensional topology”. This is what differential topology is for two- or three- or four-dimensional spaces. You know, shapes we can imagine with ease in the real world. Maybe imagine with some effort, for four dimensions. This kind of branches out of differential topology because having so few dimensions to work in makes a lot of problems harder. We need specialized theoretical tools that only work for these cases. Is that enough to count as a separate branch? It depends what topologists you want to pick a fight with. (I don’t want a fight with any of them. I’m over here in numerical mathematics when I’m not merely blogging. I’m happy to provide space for anyone wishing to defend her branch of topology.)

But each grows out of this quite general, quite abstract idea, also known as “point-set topology”, that’s all about sets and collections of sets. There is much that we can learn from thinking about how to collect the things that are possible.

The Summer 2017 Mathematics A To Z: N-Sphere/N-Ball


Today’s glossary entry is a request from Elke Stangl, author of the Elkemental Force blog, which among other things has made me realize how much there is interesting to say about heat pumps. Well, you never know what’s interesting before you give it serious thought.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

N-Sphere/N-Ball.

I’ll start with space. Mathematics uses a lot of spaces. They’re inspired by geometry, by the thing that fills up our room. Sometimes we make them different by simplifying them, by thinking of the surface of a table, or what geometry looks like along a thread. Sometimes we make them bigger, imagining a space with more directions than we have. Sometimes we make them very abstract. We realize that we can think of polynomials, or functions, or shapes as if they were points in space. We can describe things that work like distance and direction and angle for these more abstract things.

What are useful things we know about space? Many things. Whole books full of things. Let me pick one of them. Start with a point. Suppose we have a sense of distance, of how far one thing is from another. Then we can have an idea of the neighborhood. We can talk about some chunk of space that’s near our starting point.

So let’s agree on a space, and on some point in that space. You give me a distance. I give back to you — well, two obvious choices. One of them is all the points in that space that are exactly that distance from our agreed-on point. We know what this is, at least in the two kinds of space we grow up comfortable with. In three-dimensional space, this is a sphere. A shell, at least, centered around whatever that first point was. In two-dimensional space, on our desktop, it’s a circle. We know it can look a little weird: if we started out in a one-dimensional space, there’d be only two points, one on either side of the original center point. But it won’t look too weird. Imagine a four-dimensional space. Then we can speak of a hypersphere. And we can imagine that as being somehow a ball that’s extremely spherical. Maybe it pokes out of the rendering we try making of it, like a cartoon character falling out of the movie screen. We can imagine a five-dimensional space, or a ten-dimensional one, or something with even more dimensions. And we can conclude there’s a sphere for even that much space. Well, let it.

What are spheres good for? Well, they’re nice familiar shapes. Even if they’re in a weird number of dimensions. They’re useful, too. A lot of what we do in calculus, and in analysis, is about dealing with difficult points. Points where a function is discontinuous. Points where the function doesn’t have a value. One of calculus’s reliable tricks, though, is that we can swap information about the edge of things for information about the interior. We can replace a point with a sphere and find our work is easier.

The other thing I could give you. It’s a ball. That’s all the points that aren’t more than your distance away from our point. It’s the inside, the whole planet rather than just the surface of the Earth.

And here’s an ambiguity. Is the surface a part of the ball? Should we include the edge, or do we just want the inside? And that depends on what we want to do. Either might be right. If we don’t need the edge, then we have an open set (stick around for Friday). This gives us the open ball. If we do need the edge, then we have a closed set, and so, the closed ball.
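As a tiny sketch of the difference in Python, using the number line so that distance is just the ordinary absolute difference:

    def in_open_ball(x, center, radius):
        return abs(x - center) < radius     # strictly inside; the edge is left out

    def in_closed_ball(x, center, radius):
        return abs(x - center) <= radius    # the edge points count too

    print(in_open_ball(3.0, 0.0, 3.0))      # False: exactly on the edge
    print(in_closed_ball(3.0, 0.0, 3.0))    # True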

Balls are so useful. Take a chunk of space that you find interesting for whatever reason. We can represent that space as the joining together (the “union”) of a bunch of balls. Probably not all the same size, but that’s all right. We might need infinitely many of these balls to get the chunk precisely right, or as close to right as can be. But that’s all right. We can still do it. Most anything we want to analyze is easier to prove on any one of these balls. And since we can describe the complicated shape as this combination of balls, then we can know things about the whole complicated shape. It’s much the way we can know things about polygons by breaking them into triangles, and showing things are true about triangles.

Sphere or ball, whatever you like. We can describe how many dimensions of space the thing occupies with the prefix. The 3-ball is everything close enough to a point that’s in a three-dimensional space. The 2-ball is everything close enough in a two-dimensional space. The 10-ball is everything close enough to a point in a ten-dimensional space. The 3-sphere is … oh, all right. Here we have a little squabble. People doing geometry prefer this to be the sphere in three dimensions. People doing topology prefer this to be the sphere whose surface has three dimensions, that is, the sphere in four dimensions. Usually which you mean will be clear from context: are you reading a geometry or a topology paper? If you’re not sure, oh, look for anything hinting at the number of spatial dimensions. If nothing gives you a hint maybe it doesn’t matter.

Either way, we do want to talk about the family of shapes without committing ourselves to any particular number of dimensions. And so that’s why we fall back on ‘N’. ‘N’ is a good name for “the number of dimensions we’re working in”, and so we use it. Then we have the N-sphere and the N-ball, a sphere-like shape, or a ball-like shape, that’s in however much space we need for the problem.

I mentioned something early on that I bet you paid no attention to. That was that we need a space, and a point inside the space, and some idea of distance. One of the surprising things mathematics teaches us about distance is … there’s a lot of ideas of distance out there. We have what I’ll call an instinctive idea of distance. It’s the one that matches what holding a ruler up to stuff tells us. But we don’t have to have that.

I sense the grumbling already. Yes, sure, we can define distance by some screwball idea, but do we ever need it? To which the mathematician answers, well, what if you’re trying to figure out how far away something in midtown Manhattan is? Where you can only walk along streets or avenues and we pretend Broadway doesn’t exist? Huh? How about that? Oh, fine, the skeptic might answer. Grant that there can be weird cases where the straight-line ruler distance is less enlightening than some other scheme is.

Well, there are. There exists a whole universe of different ideas of distance. There’s a handful of useful ones. The ordinary straight-line ruler one, the Euclidean distance, you get by a method so familiar it’s worth saying what you do. You find the coordinates of your two given points. Take the pairs of corresponding coordinates: the x-coordinates of the two points, the y-coordinates of the two points, the z-coordinates, and so on. Find the differences between corresponding coordinates. Take the absolute value of those differences. Square all those absolute-value differences. Add up all those squares. Take the square root of that. Fine enough.
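That recipe translates into a few lines of Python, if you’d like it spelled out. Nothing here is special to two or three dimensions; hand it points with as many coordinates as you please.

    def euclidean_distance(p, q):
        differences = [abs(a - b) for a, b in zip(p, q)]  # matching coordinates
        return sum(d * d for d in differences) ** 0.5     # sum of squares, then root

    print(euclidean_distance((0, 0), (3, 4)))             # 5.0
    print(euclidean_distance((1, 2, 3), (4, 6, 3)))       # 5.0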

There’s a lot of novelty acts. For example, do that same thing, only instead of raising the differences to the second power, raise them to the 26th power. When you get the sum, instead of the square root, take the 26th root. There. That’s a legitimate distance. No, you will never need this, but your analysis professor might give you it as a homework problem sometime.

Some are useful, though. Raising to the first power, and then eventually taking the first root, gives us something useful. Yes, raising to a first power and taking a first root isn’t doing anything. We just say we’re doing that for the sake of consistency. Raising to an infinitely large power, and then taking an infinitely great root, inspires angry glares. But we can make that idea rigorous. When we do it gives us something useful.

And here’s a new, amazing thing. We can still make “spheres” for these other distances. On a two-dimensional space, the “sphere” with this first-power-based distance will look like a diamond. The “sphere” with this infinite-power-based distance will look like a square. On a three-dimensional space the “sphere” with the first-power-based distance looks like a … well, more complicated, three-dimensional diamond. The “sphere” with the infinite-power-based distance looks like a box. The “balls” in all these cases look like what you expect from knowing the spheres.
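You can convince yourself of the diamond and the square with a quick check. This little sketch uses the first-power distance (add up the absolute differences) and the infinite-power distance (take the biggest absolute difference); the test points are just a handful I picked for illustration, nothing special about them.

    def taxicab(p, q):
        return sum(abs(a - b) for a, b in zip(p, q))      # first-power distance

    def biggest_step(p, q):
        return max(abs(a - b) for a, b in zip(p, q))      # infinite-power distance

    origin = (0, 0)
    # All distance 1 from the origin in the first-power sense: corners and
    # edges of a diamond.
    for p in [(1, 0), (0, 1), (0.5, 0.5), (-0.3, 0.7)]:
        print(p, taxicab(origin, p))
    # All distance 1 in the infinite-power sense: points along a square.
    for p in [(1, 0), (1, 1), (0.4, 1), (-1, 0.7)]:
        print(p, biggest_step(origin, p))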

As with the ordinary ideas of spheres and balls these shapes let us understand space. Spheres offer a natural path to understanding difficult points. Balls offer a natural path to understanding complicated shapes. The different ideas of distance change how we represent these, and how complicated they are, but not the fact that we can do it. And it allows us to start thinking of what spheres and balls for more abstract spaces, universes made of polynomials or formed of trig functions, might be. They’re difficult to visualize. But we have the grammar that lets us speak about them now.

And for a postscript: I also wrote about spheres and balls as part of my Set Tour a couple years ago. Here’s the essay about the N-sphere, although I didn’t exactly call it that. And here’s the essay about the N-ball, again not quite called that.

The Summer 2017 Mathematics A To Z: Morse Theory


Today’s A To Z entry is a change of pace. It dives deeper into analysis than this round has been. The term comes from Mr Wu, of the Singapore Maths Tuition blog, whom I thank for the request.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Morse Theory.

An old joke, as most of my academia-related ones are. The young scholar says to his teacher how amazing it was in the old days, when people were foolish, and thought the Sun and the Stars moved around the Earth. How fortunate we are to know better. The elder says, ah yes, but what would it look like if it were the other way around?

There are many things to ponder packed into that joke. For one, the elder scholar’s awareness that our ancestors were no less smart or perceptive or clever than we are. For another, the awareness that there is a problem. We want to know about the universe. But we can only know what we perceive now, where we are at this moment. Even a note we’ve written in the past, or a message from a trusted friend, we can’t take uncritically. What we know is that we perceive this information in this way, now. When we pay attention to our friends in the philosophy department we learn that knowledge is even harder than we imagine. But I’ll stop there. The problem is hard enough already.

We can put it in a mathematical form, one that seems immune to many of the worst problems of knowledge. In this form it looks something like this: what can we know about the universe, if all we really know is what things in that universe are doing near us? The things that we look at are functions. The universe we’re hoping to understand is the domain of the functions. One filter we use to see the universe is Morse Theory.

We don’t look at every possible function. Functions are too varied and weird for that. We look at functions whose range is the real numbers. And they must be smooth. This is a term of art. It means the function has derivatives. It has to be continuous. It can’t have sharp corners. And it has to have lots of derivatives. The first derivative of a smooth function has to also be continuous, and has to also lack corners. And the derivative of that first derivative has to be continuous, and to lack corners. And the derivative of that derivative has to be the same. A smooth function can be differentiated over and over again, infinitely many times. None of those derivatives can have corners or jumps or missing patches or anything. This is what makes it smooth.

Most functions are not smooth, in much the same way most shapes are not circles. That’s all right. There are many smooth functions anyway, and they describe things we find interesting. Or we think they’re interesting, anyway. Smooth functions are easy for us to work with, and to know things about. There’s plenty of smooth functions. If you’re interested in something else there’s probably a smooth function that’s close enough for practical use.

Morse Theory builds on the “critical points” of these smooth functions. A critical point, in this context, is one where the derivative is zero. Derivatives being zero usually signal something interesting going on. Often they show where the function changes behavior. In freshman calculus they signal where a function changes from increasing to decreasing, so the critical point is a maximum. In physics they show where a moving body no longer has an acceleration, so the critical point is an equilibrium. Or where a system changes from one kind of behavior to another. And here — well, many things can happen.

So take a smooth function. And take a critical point that it’s got. (And, erg. Technical point. The derivative of your smooth function, at that critical point, shouldn’t be having its own critical point going on at the same spot. That makes stuff more complicated.) It’s possible to approximate your smooth function near that critical point with, of course, a polynomial. It’s always polynomials. The shape of these polynomials gives you an index for these points. And that can tell you something about the shape of the domain you’re on.

At least, it tells you something about what the shape is where you are. The universal model for this — based on skimming texts and papers and popularizations of this — is of a torus standing vertically. Like a doughnut that hasn’t tipped over, or like a tire on a car that’s working as normal. I suspect this is the best shape to use for teaching, as anyone can understand it while it still shows the different behaviors. I won’t resist.

Imagine slicing this tire horizontally. Slice it close to the bottom, below the central hole, and the part that drops down is a disc. At least, it could be flattened out tolerably well to a disc.

Slice it somewhere that intersects the hole, though, and you have a different shape. You can’t squash that down to a disc. You have a noodle shape. A cylinder at least. That’s different from what you got from the first slice.

Slice the tire somewhere higher. Somewhere above the central hole, and you have … well, it’s still a tire. It’s got a hole in it, but you could imagine patching it and driving on. There’s another different shape that we’ve gotten from this.

Imagine we were confined to the surface of the tire, but did not know what surface it was. That we start at the lowest point on the tire and ascend it. From the way the smooth functions around us change we can tell how the surface we’re on has changed. We can see its change from “basically a disc” to “basically a noodle” to “basically a doughnut”. We could work out what the surface we’re on has to be, thanks to how these smooth functions around us change behavior.
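If you’d like to see those changes computed, here’s a sketch using Python and the sympy library. The parametrization and names are my own choices, nothing standard: a torus of ring radius 2 and tube radius 1 stood upright, with the height of each point as the smooth function. The index printed for each critical point counts the directions in which the height curves downward there: 0 at the bottom, 1 at each saddle by the hole, 2 at the top.

    import sympy as sp

    u, v = sp.symbols('u v', real=True)
    R, r = 2, 1                        # ring radius and tube radius

    # Height (the z-coordinate) of the point at angle u around the tube and
    # angle v around the ring, for a torus standing on edge like a tire.
    h = (R + r * sp.cos(u)) * sp.sin(v)

    hessian = sp.hessian(h, (u, v))

    # The four critical points: the bottom, the two saddles beside the hole,
    # and the top.
    for name, uu, vv in [("bottom", 0, -sp.pi / 2),
                         ("lower saddle", sp.pi, -sp.pi / 2),
                         ("upper saddle", sp.pi, sp.pi / 2),
                         ("top", 0, sp.pi / 2)]:
        grad = [sp.diff(h, w).subs({u: uu, v: vv}) for w in (u, v)]
        eigs = hessian.subs({u: uu, v: vv}).eigenvals()
        index = sum(mult for val, mult in eigs.items() if val < 0)
        print(name, "height", h.subs({u: uu, v: vv}),
              "gradient", grad, "index", index)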

Occasionally we mathematical-physics types want to act as though we’re not afraid of our friends in the philosophy department. So we deploy the second thing we know about Immanuel Kant. He observed that knowing the force of gravity falls off as the square of the distance between two things implies that the things should exist in a three-dimensional space. (Source: I dunno, I never read his paper or book or whatever and dunno I ever heard anyone say they did.) It’s a good observation. Geometry tells us what physics can happen, but what physics does happen tells us what geometry they happen in. And it tells the philosophy department that we’ve heard of Immanuel Kant. This impresses them greatly, we tell ourselves.

Morse Theory is a manifestation of how observable physics teaches us the geometry they happen on. And in an urgent way, too. Some of Edward Witten’s pioneering work in superstring theory was in bringing Morse Theory to quantum field theory. He showed a set of problems called the Morse Inequalities gave us insight into supersymmetric quantum mechanics. The link between physics and doughnut-shapes may seem vague. This is because you’re not remembering that mathematical physics sees “stuff happening” as curves drawn on shapes which represent the kind of problem you’re interested in. Learning what the shapes representing the problem look like is solving the problem.

If you’re interested in the substance of this, the universally-agreed reference is J Milnor’s 1963 text Morse Theory. I confess it’s hard going to read, because it’s a symbols-heavy textbook written before the existence of LaTeX. Each page reminds one why typesetters used to get hazard pay, and not enough of it.

The Summer 2017 Mathematics A To Z: Klein Bottle


Gaurish, of the For The Love Of Mathematics blog, takes me back into topology today. And it’s a challenging one, because what can I say about a shape this involved when I’m too lazy to draw pictures or include photographs most of the time?

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

In 1958 Clifton Fadiman, an open public intellectual and panelist on many fine old-time radio and early TV quiz shows, edited the book Fantasia Mathematica. It’s a pleasant read and you likely can find a copy in a library or university library nearby. It’s a collection of mathematically-themed stuff. Mostly short stories, a few poems, some essays, even that bit where Socrates works through a proof. And some of it is science fiction, this from an era when science fiction was really disreputable.

If there’s a theme to the science fiction stories included it is: Möbius Strips, huh? There are so many stories in the book that amount to, “what is this crazy bizarre freaky weird ribbon-like structure that only has the one side? Huh?” As I remember even one of the non-science-fiction stories is a Möbius Strip story.

I don’t want to sound hard on the writers, nor on Fadiman for collecting what he has. A story has to be about people doing something, even if it’s merely exploring some weird phenomenon. You can imagine people dealing with weird shapes. It’s hard to imagine what story you could tell about an odd perfect number. (Well, that isn’t “here’s how we discovered the odd perfect number”, which amounts to a lot of thinking and false starts. Or that doesn’t make the odd perfect number a MacGuffin, the role equally well served by letters of transit or a heap of gold or whatever.) Many of the stories that aren’t about the Möbius Strip are about four- and higher-dimensional shapes that people get caught in or pass through. One of the hyperdimensional stories, A J Deutsch’s “A Subway Named Möbius”, even pulls in the Möbius Strip. The name doesn’t fit, but it is catchy, and is one of the two best tall tales about the Boston subway system.

Besides, it’s easy to see why the Möbius Strip is interesting. It’s a ribbon where both sides are the same side. What’s not neat about that? It forces us to realize that while we know what “sides” are, there’s stuff about them that isn’t obvious. That defies intuition. It’s so easy to make that it holds another mystery. How is this not a figure known to the ancients and used as a symbol of paradox for millennia? I have no idea; it’s hard to guess why something was not noticed when it could easily have been. It dates to 1858, when August Ferdinand Möbius and Johann Benedict Listing independently published on it.

The Klein Bottle is newer by a generation. Felix Klein, who used group theory to enlighten geometry and vice-versa, described the surface in 1882. It has much in common with the Möbius Strip. It’s a thing that looks like a solid. But it’s impossible to declare one side to be outside and the other in, at least not in any logically coherent way. Take one and dab a spot with a magic marker. You could trace, with the marker, a continuous curve that gets around to the same spot on the “other” “side” of the thing. You see why I have to put quotes around “other” and “side”. I believe you know what I mean when I say this. But taken literally, it’s nonsense.

The Klein Bottle’s a two-dimensional surface. By that I mean that we could cover it with what look like lines of longitude and latitude. Those coordinates would tell you, without confusion, where a point on the surface is. But it’s embedded in a four-dimensional space. (Or a higher-dimensional space, but everything past the fourth dimension is extravagance.) We have never seen a Klein Bottle in its whole. I suppose there are skilled people who can imagine it faithfully, but how would anyone else ever know?

Big deal. We’ve never seen a tesseract either, but we know the shadow it casts in three-dimensional space. So it is with the Klein Bottle. Visit any university mathematics department. If they haven’t got a glass replica of one in the dusty cabinets welcoming guests to the department, never fear. At least one of the professors has one on an office shelf, probably beside some exams from eight years ago. They make nice-looking jars. Klein Bottles don’t have to. There are different shapes their projection into three dimensions can take. But the only really different one is this sort of figure-eight helical shape that looks like a roller coaster gone vicious. (There’s also a mirror image of this, the helix winding the opposite way.) These representations have the surface cross through itself. In four dimensions, it does no such thing, any more than the edges of a cube cross one another. It’s just the lines in a picture on a piece of paper that cross.

The Möbius Strip is good practice for learning about the Klein Bottle. We can imagine creating a Bottle by the correct stitching-together of two strips. Or, if you feel destructive, we can start with a Bottle and slice it, producing a pair of Möbius Strips. Both are non-orientable. We can’t make a division between one side and another that reflects any particular feature of the shape. One of the helix-like representations of the Klein Bottle also looks like a pool toy-ring version of the Möbius Strip.

And strange things happen on these surfaces. You might remember the four-color map theorem. Four colors are enough to color any two-dimensional map without adjacent territories having to share a color. (This isn’t actually so, as the territories have to be contiguous, with no enclaves of one territory inside another. Never mind.) This is so for territories on the sphere. It’s hard to prove (although the five-color theorem is easy.) Not so for the Möbius Strip: territories on it might need as many as six colors. And likewise for the Klein Bottle. That’s a particularly neat result, as the Heawood Conjecture tells us the Klein Bottle might need seven. The Heawood Conjecture is otherwise dead-on in telling us how many colors different kinds of surfaces need for their map-colorings. The Klein Bottle is a strange surface. And yes, it was easier to prove the six-color theorem on the Klein Bottle than it was to prove the four-color theorem on the plane or sphere.

(Though it’s got the tentative-sounding name of conjecture, the Heawood Conjecture is proven. Heawood put it out as a conjecture in 1890. It took to 1968 for the whole thing to be finally proved. I imagine all those decades of being thought but not proven true gave it a reputation. It’s not wrong for Klein Bottles. If six colors are enough for these maps, then so are seven colors. It’s just that Klein Bottles are the lone case where the bound is tighter than Heawood suggests.)

All that said, do we care? Do Klein Bottles represent something of particular mathematical interest? Or are they imagination-capturing things we don’t really use? I confess I’m not enough of a topologist to say how useful they are. They are easily-understood examples of algebraic or geometric constructs. These are things with names like “quotient spaces” and “deck transformations” and “fiber bundles”. The thought of the essay I would need to write to say what a fiber bundle is makes me appreciate having good examples of the thing around. So if nothing else they are educationally useful.

And perhaps they turn up more than I realize. The geometry of Möbius Strips turns up in many surprising places: music theory and organic chemistry, superconductivity and roller coasters. It would seem out of place if the kinds of connections which make a Klein Bottle don’t turn up in our twisty world.

The Summer 2017 Mathematics A To Z: Cohomology


Today’s A To Z topic is another request from Gaurish, of the For The Love Of Mathematics blog. Also part of what looks like a quest to make me become a topology blogger, at least for a little while. It’s going to be exciting and I hope not to faceplant as I try this.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Also, a note about Thomas K Dye, who’s drawn the banner art for this and for the Why Stuff Can Orbit series: the publisher for collections of his comic strip is having a sale this weekend.

Cohomology.

The word looks intimidating, and faintly of technobabble. It’s less cryptic than it appears. We see parts of it in non-mathematical contexts. In biology class we would see “homology”, the sharing of structure in body parts that look superficially very different. We also see it in art class. The instructor points out that a dog’s leg looks like that because they stand on their toes. What looks like a backward-facing knee is just the ankle, and if we stand on our toes we see that in ourselves. We might see it in chemistry, as many interesting organic compounds differ only in how long or how numerous the boring parts are. The stuff that does work is the same, or close to the same. And this is a hint to what a mathematician means by cohomology. It’s something in shapes. It’s particularly something in how different things might have similar shapes. Yes, I am using a homology in language here.

I often talk casually about the “shape” of mathematical things. Or their “structures”. This sounds weird and abstract to start and never really gets better. We can get some footing if we think about drawing the thing we’re talking about. Could we represent the thing we’re working on as a figure? Often we can. Maybe we can draw a polygon, with the vertices of the shape matching the pieces of our mathematical thing. We get the structure of our thing from thinking about what we can do to that polygon without changing the way it looks. Or without changing the way we can do whatever our original mathematical thing does.

This leads us to homologies. We get them by looking for stuff that’s true even if we moosh up the original thing. The classic homology comes from polyhedrons, three-dimensional shapes. There’s a relationship between the number of vertices, the number of edges, and the number of faces of a polyhedron. It doesn’t change even if you stretch the shape out longer, or squish it down, for that matter slice off a corner. It only changes if you punch a new hole through the middle of it. Or if you plug one up. That would be unsporting. A homology describes something about the structure of a mathematical thing. It might even be literal. Topology, the study of what we know about shapes without bringing distance into it, has the number of holes that go through a thing as a homology. This gets feeling like a comfortable, familiar idea now.

But that isn’t a cohomology. That ‘co’ prefix looks dangerous. At least it looks significant. When the ‘co’ prefix has turned up before it’s meant something is shaped by how it refers to something else. Coordinates aren’t just number lines; they’re collections of number lines that we can use to say where things are. If ‘a’ is a factor of the number ‘x’, its cofactor is the number you multiply ‘a’ by in order to get ‘x’. (For real numbers that’s just x divided by a. For other stuff it might be weirder.) A codomain is a set that a function maps a domain into (and must contain the range, at least). Cosets aren’t just sets; they’re ways we can divide (for example) the counting numbers into odds and evens.

So what’s the ‘co’ part for a homology? I’m sad to say we start losing that comfortable feeling now. We have to look at something we’re used to thinking of as a process as though it were a thing. These things are morphisms: what are the ways we can match one mathematical structure to another? Sometimes the morphisms are easy. We can match the even numbers up with all the integers: match 0 with 0, match 2 with 1, match -6 with -3, and so on. Addition on the even numbers matches with addition on the integers: 4 plus 6 is 10; 2 plus 3 is 5. For that matter, we can match the integers with the multiples of three: match 1 with 3, match -1 with -3, match 5 with 15. 1 plus -2 is -1; 3 plus -6 is -9.

What happens if we look at the sets of matchings that we can do as if that were a set of things? That is, not some human concept like ‘2’ but rather ‘match a number with one-half its value’? And ‘match a number with three times its value’? These can be the population of a new set of things.

And these things can interact. Suppose we “match a number with one-half its value” and then immediately “match a number with three times its value”. Can we do that? … Sure, easily. 4 matches to 2 which goes on to 6. 8 matches to 4 which goes on to 12. Can we write that as a single matching? Again, sure. 4 matches to 6. 8 matches to 12. -2 matches to -3. We can write this as “match a number with three-halves its value”. We’ve taken “match a number with one-half its value” and combined it with “match a number with three times its value”. And it’s given us the new “match a number with three-halves its value”. These things we can do to the integers are themselves things that can interact.
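A tiny sketch of that in Python, just to make “doing stuff to a set” feel a bit more concrete. The function names are mine; the arithmetic is the same as above.

    def half(x):
        return x / 2          # match a number with one-half its value

    def triple(x):
        return 3 * x          # match a number with three times its value

    def compose(first, then):
        # Do one matching and then the other; the result is itself a matching.
        return lambda x: then(first(x))

    three_halves = compose(half, triple)
    print(three_halves(4), three_halves(8), three_halves(-2))   # 6.0 12.0 -3.0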

This is a good moment to pause and let the dizziness pass.

It isn’t just you. There is something weird thinking of “doing stuff to a set” as a thing. And we have to get a touch more abstract than even this. We should be all right, but please do not try to use this to defend your thesis in category theory. Just use it to not look forlorn when talking to your friend who’s defending her thesis in category theory.

Now, we can take this collection of all the ways we can relate one set of things to another. And we can combine this with an operation that works kind of like addition. Some way to “add” one way-to-match-things to another and get a way-to-match-things. There’s also something that works kind of like multiplication. It’s a different way to combine these ways-to-match-things. This forms a ring, which is a kind of structure that mathematicians learn about in Introduction to Not That Kind Of Algebra. There are many constructs that are rings. The integers, for example, are also a ring, with addition and multiplication the same old processes we’ve always used.

And just as we can sort the integers into odds and evens — or into other groupings, like “multiples of three” and “one plus a multiple of three” and “two plus a multiple of three” — so we can sort the ways-to-match-things into new collections. And this is our cohomology. It’s the ways we can sort and classify the different ways to manipulate whatever we started on.

I apologize that this sounds so abstract as to barely exist. I admit we’re far from a nice solid example such as “six”. But the abstractness is what gives cohomologies explanatory power. We depend very little on the specifics of what we might talk about. And therefore what we can prove is true for very many things. It takes a while to get there, is all.

The End 2016 Mathematics A To Z: Jordan Curve


I realize I used this thing in one of my Theorem Thursday posts but never quite said what it was. Let me fix that.

Jordan Curve

Get a rubber band. Well, maybe you can’t just now, even if you wanted to after I gave orders like that. Imagine a rubber band. I apologize to anyone so offended by my imperious tone that they’re refusing. It’s the convention for pop mathematics or science.

Anyway, take your rubber band. Drop it on a table. Fiddle with it so it hasn’t got any loops in it and it doesn’t twist over any. I want the whole of one edge of the band touching the table. You can imagine the table too. That is a Jordan Curve, at least as long as the rubber band hasn’t broken.

This may not look much like a circle. It might be close, but I bet it’s got some wriggles in its curves. Maybe it even curves so much the thing looks more like a kidney bean than a circle. Maybe it pinches so much that it looks like a figure eight, a couple of loops connected by a tiny bridge on the interior. Doesn’t matter. You can bring out the circle. Put your finger inside the rubber band’s loops and spiral your finger around. Do this gently and the rubber band won’t jump off the table. It’ll round out to as perfect a circle as the limitations of matter allow.

And for that matter, if we wanted, we could take a rubber band laid down as a perfect circle. Then nudge it here and push it there and wrinkle it up into as complicated a figure as you like. Either way is as possible.

A Jordan Curve is a closed curve, a curve that loops around back to itself. And it’s simple. That is, it doesn’t cross over itself at any point. However weird and loopy this figure is, as long as it doesn’t cross over itself, it’s got in a sense the same shape as a circle. We can imagine a function that matches every point on a true circle to a point on the Jordan Curve. A set of points in order on the original circle will match to points in the same order on the Jordan Curve. There’s nothing missing and there’s no jumps or ambiguous points. And no point on the Jordan Curve matches to two or more on the original circle. (This is why we don’t let the curve cross over itself.)

When I wrote about the Jordan Curve Theorem it was about how to tell how a curve divides a plane into two pieces, an inside and an outside. You can have some pretty complicated-looking figures. I have an example on the Jordan Curve Theorem essay, but you can make your own by doodling. And we can look at it as a circle, as a rubber band, twisted all around.

This all dips into topology, the study of how shapes connect when we don’t care about distance. But there are simple wondrous things to find about them. For example. Draw a Jordan Curve, please. Any that you like. Now draw a triangle. Again, any that you like.

There is some trio of points in your Jordan Curve which connect to a triangle the same shape as the one you drew. It may be bigger than your triangle, or smaller. But it’ll look similar. The angles inside will all be the same as the ones you started with. This should help make doodling during a dull meeting even more exciting.

There may be four points on your Jordan Curve that make a square. I don’t know. Nobody knows for sure. There certainly are if your curve is convex, that is, if no line between any two points on the curve goes outside the curve. And it’s true even for curves that aren’t convex if they are smooth enough. But generally? For an arbitrary curve? We don’t know. It might be true. It might be impossible to find a square in some Jordan Curve. It might be the Jordan Curve you drew. Good luck looking.

Reading the Comics, September 17, 2016: Show Your Work Edition


As though to reinforce how nothing was basically wrong, Comic Strip Master Command sent a normal number of mathematically themed comics around this past week. They bunched the strips up in the first half of the week, but that will happen. It was a fun set of strips in any event.

Rob Harrell’s Adam @ Home for the 11th tells of a teacher explaining division through violent means. I’m all for visualization tools and if we are going to use them, the more dramatic the better. But I suspect Mrs Clark’s students will end up confused about what exactly they’ve learned. If a doll is torn into five parts, is that communicating that one divided by five is five? If the students were supposed to identify the mass of the parts of the torn-up dolls as the result of dividing one by five, was that made clear to them? Maybe it was. But there’s always the risk in a dramatic presentation that the audience will misunderstand the point. The showier the drama the greater the risk, it seems to me. But I did only get the demonstration secondhand; who knows how well it was done?

Greg Cravens’ The Buckets for the 11th has the kid, Toby, struggling to turn a shirt backwards and inside-out without taking it off. As the commenters note this is the sort of problem we get into all the time in topology. The field is about what can we say about shapes when we don’t worry about distance? If all we know about a shape is the ways it’s connected, the number of holes it has, whether we can distinguish one side from another, what else can we conclude? I believe Gocomics.com commenter Mike is right: take one hand out the bottom of the shirt and slide it into the other sleeve from the outside end, and proceed from there. But I have not tried it myself. I haven’t yet started wearing long-sleeve shirts for the season.

Bill Amend’s FoxTrot for the 11th — a new strip — does a story problem featuring pizzas cut into some improbable numbers of slices. I don’t say it’s unrealistic someone might get this homework problem. Just that the story writer should really ask whether they’ve ever seen a pizza cut into sevenths. I have a faint memory of being served a pizza cut into tenths by some daft pizza shop, which implies fifths is at least possible. Sevenths I refuse, though.

Mark Tatulli’s Heart of the City for the 12th plays on the show-your-work directive many mathematics assignments carry. I like Heart’s showiness. But the point of showing your work is because nobody cares what (say) 224 divided by 14 is. What’s worth teaching is the ability to recognize what approaches are likely to solve what problems. What’s tested is whether someone can identify a way to solve the problem that’s likely to succeed, and whether that can be carried out successfully. This is why it’s always a good idea, if you are stumped on a problem, to write out how you think this problem should be solved. Writing out what you mean to do can clarify the steps you should take. And it can guide your instructor to whether you’re misunderstanding something fundamental, or whether you just missed something small, or whether you just had a bad day.

Norm Feuti’s Gil for the 12th, another rerun, has another fanciful depiction of showing your work. The teacher’s got a fair complaint in the note. We moved away from tally marks as a way to denote numbers for reasons. Twelve depictions of apples are harder to read than the number 12. And they’re terrible if we need to depict numbers like one-half or one-third. Might be an interesting side lesson in that.

Brian Basset’s Red and Rover for the 14th is a rerun and one I’ve mentioned in these parts before. I understand Red getting fired up to be an animator by the movie. It’s been a while since I watched Donald Duck in Mathmagic Land but my recollection is that while it was breathtaking and visually inventive it didn’t really get at mathematics. I mean, not at noticing interesting little oddities and working out whether they might be true always, or sometimes, or almost never. There is a lot of play in mathematics, especially in the exciting early stages where one looks for a thing to prove. But it’s also in seeing how an ingenious method lets you get just what you wanted to know. I don’t know that the short demonstrates enough of that.

Punkinhead: 'Can you answer an arithmetic question for me, Julian?' Julian: 'Sure.' Punkinhead: 'What is it?'
Bud Blake’s Tiger rerun for the 15th of September, 2016. I don’t get to talking about the art of the comics here, but, I quite like Julian’s expressions here. And Bud Blake drew fantastic rumpled clothes.

Bud Blake’s Tiger rerun for the 15th gives Punkinhead the chance to ask a question. And it’s a great question. I’m not sure what I’d say arithmetic is, not if I’m going to be careful. Offhand I’d say arithmetic is a set of rules we apply to a set of things we call numbers. The rules are mostly about how we can take two numbers and a rule and replace them with a single number. And these turn out to correspond uncannily well with the sorts of things we do with counting, combining, separating, and doing some other stuff with real-world objects. That it’s so useful is why, I believe, arithmetic and geometry were the first mathematics humans learned. But much of geometry we can see. We can look at objects and see how they fit together. Arithmetic we have to infer from the way the stuff we like to count works. And that’s probably why it’s harder to do when we start school.

What’s not good about that as an answer is that it actually applies to a lot of mathematical constructs, including those crazy exotic ones you sometimes see in science press. You know, the ones where there’s this impossibly complicated tangle with ribbons of every color and a headline about “It’s Revolutionary. It’s 46-Dimensional. It’s Breaking The Rules Of Geometry. Is It The Shape That Finally Quantizes Gravity?” or something like that. Well, describe a thing vaguely and it’ll match a lot of other things. But also when we look to new mathematical structures, we tend to look for things that resemble arithmetic. Group theory, for example, is one of the cornerstones of modern mathematical thought. It’s built around having a set of things on which we can do something that looks like addition. So it shouldn’t be a surprise that many groups have a passing resemblance to arithmetic. Mathematics may produce universal truths. But the ones we see are also ones we are readied to see by our common experience. Arithmetic is part of that common experience.

'Dude, you have something on your face.' 'Food? Ink? Zit? What??' 'I think it's math.' 'Oh, yeah. I fell asleep on my Calculus book.'
Jerry Scott and Jim Borgman’s Zits for the 14th of September, 2016. Properly speaking that is ink on his face, but I suppose saying it’s calculus pins down where it came from. Just observing.

Also Jerry Scott and Jim Borgman’s Zits for the 14th I think doesn’t really belong here. It’s just got a cameo appearance by the concept of mathematics. Dave Whamond’s Reality Check for the 17th similarly just mentions the subject. But I did want to reassure any readers worried after last week that Pierce recovered fine. Also that, you know, for not having a stomach for mathematics he’s doing well carrying on. Discipline will carry one far.

Theorem Thursday: The Five-Color Map Theorem


People think mathematics is mostly counting and arithmetic. It’s what we get at when we say “do the math[s]”. It’s why the mathematician in the group is the one called on to work out what the tip should be. Heck, I attribute part of my love for mathematics to a Berenstain Bears book which implied being a mathematician was mostly about adding up sums in a base on the Moon, which is an irresistible prospect. In fact, usually counting and arithmetic are, at least, minor influences on real mathematics. There are legends of how catastrophically bad at figuring mathematical genius can be. But usually isn’t always, and this week I’d like to show off a case where counting things and adding things up lets us prove something interesting.

The Five-Color Map Theorem.

No, not four. I imagine anyone interested enough to read a mathematics blog knows the four-color map theorem. It says that you only need four colors to color a map. That’s true, given some qualifiers. No discontiguous chunks that need the same color. Two regions with the same color can touch at a point, they just can’t share a line or curve. The map is on a plane or the surface of a sphere. Probably some other requirements. I’m not going to prove that. Nobody has time for that. The best proofs we’ve figured out for it amount to working out how every map fits into one of a huge number of cases, and trying out each case. It’s possible to color each of those cases with only four colors, so, we’re done. Nice but unenlightening and way too long to deal with.

The five-color map theorem is a lot like the four-color map theorem, with this difference: it says that you only need five colors to color a map. Same qualifiers as before. Yes, it’s true because the four-color map theorem is true and because five is more than four. We can do better than that. We can prove five colors are enough even without knowing whether four colors will do. And it’s easy. The ease of the five-color map theorem gave people reason to think four colors would be maybe harder but still manageable.

The proof I want to show uses one of mathematicians’ common tricks. It employs the same principle which Hercules used to slay the Hydra, although it has less cauterizing lake-monster flesh with flaming torches, as that’s considered beneath the dignity of the Academy anymore except when grading finals for general-requirements classes. The part of the idea we do use is to take a problem which we might not be able to do and cut it down to one we can do. Properly speaking this is a kind of induction proof. In those we start from problems we can do and show that if we can do those, we can do all the complicated problems. But we come at it by cutting down complicated problems and making them simple ones.

So suppose we start with a map that’s got some huge number of territories to color. I’m going to start with the United States states which were part of the Dominion of New England. As I’m sure I don’t need to remind any readers, American or otherwise, this was a 17th century attempt by the English to reorganize their many North American colonies into something with fewer administrative irregularities. It lasted almost long enough for the colonists to hear about it. At that point the Glorious Revolution happened (not involving the colonists) and everybody went back to what they were doing before.

Please enjoy my little map of the place. It gives all the states a single color because I don’t really know how to use QGIS and it would probably make my day job easier if I did. (Well, QGIS is open-source software, so its interface is a disaster and its tutorials gibberish. The only way to do something with it is to take flaming torches to it.)

Map showing New York, New Jersey, and New England (Connecticut, Rhode Island, Massachusetts, Vermont, New Hampshire, and Maine) in a vast white space.
States which, in their 17th-century English colonial form, were part of the Dominion of New England (1685-1689). More or less. If I’ve messed up don’t tell me as it doesn’t really matter for this problem.

There’s eight regions here, eight states, so it’s not like we’re at the point we can’t figure how to color this with five different colors. That’s all right. I’m using this for a demonstration. Pretend the Dominion of New England is so complicated we can’t tell whether five colors are enough. Oh, and a spot of lingo: if five colors are enough to color the map we say the map is “colorable”. We say it’s “5-colorable” if we want to emphasize five is enough colors.

So imagine that we erase the border between Maine and New Hampshire. Combine them into a single state over the loud protests of the many proud, scary Mainers. But if this simplified New England is colorable, so is the real thing. There’s at least one color not used for Greater New Hampshire, Vermont, or Massachusetts. We give that color to a restored Maine. If the simplified map can be 5-colored, so can the original.

Maybe we can’t tell. Suppose the simplified map is still too complicated to make it obvious. OK, then. Cut out another border. How about we offend Roger Williams partisans and merge Rhode Island into Massachusetts? Massachusetts started out touching five other states, which makes it a good candidate for a state that needed a sixth color. With Rhode Island reduced to being a couple counties of the Bay State, Greater Massachusetts only touches four other states. It can’t need a sixth color. There’s at least one of our original five that’s free.

OK, but, how does that help us find a color for Rhode Island? For Maine it’s easy to see why there’s a free color. But Rhode Island?

Well, it’ll have to be the same color as either Greater New Hampshire or Vermont or New York. At least one of them has to be available. Rhode Island doesn’t touch them. Connecticut’s color is out because Rhode Island shares a border with it. Same with Greater Massachusetts’s color. But we’ve got three colors for the taking.

But is our reduced map 5-colorable? Even with Maine part of New Hampshire and Rhode Island part of Massachusetts it might still be too hard to tell. There’s six territories in it, after all. We can simplify things a little. Let’s reverse the treason of 1777 and put Vermont back into New York, dismissing New Hampshire’s claim on the territory as obvious absurdity. I am never going to be allowed back into New England. This Greater New York needs one color for itself, yes. And it touches four other states. But these neighboring states don’t all touch each other. A restored Vermont could use the same color as New Jersey or Connecticut. Greater Massachusetts and Greater New Hampshire are unavailable, but there’s still two choices left.

And now look at the map we have remaining. There’s five states in it: Greater New Hampshire, Greater Massachusetts, Greater New York, Regular Old Connecticut, and Regular old New Jersey. We have five colors. Obviously we can give the five territories different colors.

This is one case, one example map. That’s all we need. A proper proof makes things more abstract, but uses the same pattern. Any map of a bunch of territories is going to have at least one territory that’s got at most five neighbors. Maybe it will have several. Look for one of them. If you find a territory with just one neighbor, such as Maine had, remove that border. You’ve got a simpler map and there must be a color free for the restored territory.

If you find a territory with just two neighbors, such as Rhode Island, take your pick. Merge it with either neighbor. You’ll still have at least one color free for the restored territory. With three neighbors, such as Vermont or Connecticut, again you have your choice. Merge it with any of the three neighbors. You’ll have a simpler map and there’ll be at least one free color.

If you have four neighbors, the way New York has, again pick a border you like and eliminate that. There is a catch. You can imagine one of the neighboring territories reaching out and wrapping around to touch the original state on more than one side. Imagine if Massachusetts ran far out to sea, looped back through Canada, and came back to touch New Jersey, Vermont from the north, and New York from the west. That’s more of a Connecticut stunt to pull, I admit. But that’s still all right. Most of the colonies tried this sort of stunt. And even if Massachusetts did that, we would have colors available. It would be impossible for Vermont and New Jersey to touch. We’ve got a theorem that proves it.

Yes, it’s the Jordan Curve Theorem, here to save us right when we might get stuck. Just like I promised last week. In this case some part of the border of New York and Really Big Massachusetts serves as our curve. Either Vermont or New Jersey is going to be inside that curve, and the other state is outside. They can’t touch. Thank you.

If you have five neighbors, the way Massachusetts has, well, maybe you’re lucky. We are here. None of its neighboring states touches more than two others. We can cut out a border easily and have colors to spare. But we could be in trouble. We could have a map in which all the bordering states touch three or four neighbors and that seems like it would run out of colors. Let me show a picture of that.

The map shows a pentagonal region A which borders five regions, B, C, D, E, and F. Each of those regions borders three or four others. B is entirely enclosed by regions A, C, and D, although from B's perspective they're all enclosed by it.
A hypothetical map with five regions named by an uninspired committee.

So this map looks dire even when you ignore that line that looks like it isn’t connected where C and D come together. Flood fill didn’t run past it, so it must be connected. It just doesn’t look right. Everybody has four neighbors except the province of B, which has three. The province of A has got five. What can we do?

Call on the Jordan Curve Theorem again. At least one of the provinces has to be landlocked, relative to the others. In this case, the borders of provinces A, D, and C come together to make a curve that keeps B in the inside and E on the outside. So we’re free to give B and E the same color. We treat this in the proof by doing a double merger. Erase the boundary between provinces A and B, and also that between provinces A and E. (Or you might merge B, A, and F together. It doesn’t matter. The Jordan Curve Theorem promises us there’ll be at least one choice and that’s all we need.)

So there we have it. As long as we have a map that has some provinces with up to five neighbors, we can reduce the map. And reduce it again, if need be, and again and again. Eventually we’ll get to a map with only five provinces and that has to be 5-colorable.
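
If it helps to see that reduce-and-restore argument as a procedure, here is a rough sketch of it in Python. This is my own illustration, not anything from the proof’s literature, and to keep it short and honestly correct it plays the easier six-color version of the game: with six colors the restored territory always has a color free, so the tricky merger case never comes up. The territory names are just the ones from my Dominion of New England map.

```python
# A sketch of the reduce-and-restore coloring argument, done with six colors
# so the restored territory always has a spare color. The five-color version
# has the same skeleton plus the Jordan-curve-powered merger step described above.

def color_map(neighbors, colors=("red", "orange", "yellow", "green", "blue", "violet")):
    """Color a planar map given as {territory: set of neighboring territories}."""
    if len(neighbors) <= len(colors):
        # Few enough territories left that each can simply get its own color.
        return {t: colors[i] for i, t in enumerate(neighbors)}

    # The counting argument below guarantees some territory has at most five neighbors.
    small = next(t for t, ns in neighbors.items() if len(ns) <= 5)

    # Remove it, color the simpler map, then restore it with a leftover color.
    smaller = {t: ns - {small} for t, ns in neighbors.items() if t != small}
    coloring = color_map(smaller, colors)
    used = {coloring[n] for n in neighbors[small]}
    coloring[small] = next(c for c in colors if c not in used)
    return coloring


dominion = {
    "Maine": {"New Hampshire"},
    "New Hampshire": {"Maine", "Vermont", "Massachusetts"},
    "Vermont": {"New Hampshire", "Massachusetts", "New York"},
    "Massachusetts": {"New Hampshire", "Vermont", "New York",
                      "Connecticut", "Rhode Island"},
    "Rhode Island": {"Massachusetts", "Connecticut"},
    "Connecticut": {"Massachusetts", "New York", "Rhode Island"},
    "New York": {"Vermont", "Massachusetts", "Connecticut", "New Jersey"},
    "New Jersey": {"New York"},
}
print(color_map(dominion))
```

The five-color refinement keeps the same shape; it only has to work harder in the one case where a territory has five neighbors wearing five different colors.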

Just … now … one little nagging thing. We’re relying on there always being some province with at most five neighbors. Why can’t there be some horrible map where every province has six or more neighbors?

Counting will tell us. Arithmetic will finish the job. But we have to get there by way of polygons.

That is, the easiest way to prove this depends on a map with boundaries that are all polygons. That’s all right. Polygons are almost the polynomials of geometry. You can make a polygon that looks so much like the original shape the eye can’t tell the difference. Look at my Dominion of New England map. That’s computer-rendered, so it’s all polygons, and yet all those shore and river boundaries look natural.

But what makes up a polygon? Well, it’s a bunch of straight lines. We call those ‘edges’. Each edge starts and ends at a corner. We call those ‘vertices’. These edges come around and close together to make a ‘face’, a territory like we’ve been talking about. We’re going to count all the regions that have a certain number of neighboring other regions.

Specifically, F2 will represent however many faces there are that have two sides. F3 will represent however many faces there are that have three sides. F4 will represent however many faces there are that have four sides. F10 … yeah, you got this.

One thing you didn’t get. The outside counts as a face. We need this to make the count come out right, so we can use some solid-geometry results. In my map that’s the vast white space that represents the Atlantic Ocean, the other United States, the other parts of Canada, the Great Lakes, all the rest of the world. So Maine, for example, belongs to F2 because it touches New Hampshire and the great unknown void of the rest of the universe. Rhode Island belongs to F3 similarly. New Hampshire’s in F4.

Any map has to have at least one face that’s in F2, F3, F4, or F5. Those faces touch at most two, three, four, or five neighbors. (If a face touched more, it would be a polygon of even more sides.)

How do we know? It comes from Euler’s Formula, which starts out describing the ways corners and edges and faces of a polyhedron fit together. Our map, with its polygons on the surface of the sphere, turns out to be just as good as a polyhedron. It looks a little less blocky, but that doesn’t show.

By Euler’s Formula, there’s this neat relationship between the number of vertices, the number of edges, and the number of faces in a polyhedron. (This is the same Leonhard Euler famous for … well, everything in mathematics, really. But in this case it’s for his work with shapes.) It holds for our map too. Call the number of vertices V. Call the number of edges E. Call the number of faces F. Then:

V - E + F = 2

Always true. Try drawing some maps yourself, using simple straight lines, and see if it works. For that matter, look at my Really Really Simplified map and see if it doesn’t hold true still.
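
If you would rather let the computer do the counting, here is about the smallest example I can think of: the cube, treated as a globe chopped into six square territories. The example is mine, not one of the maps above.

```python
# A quick check of Euler's formula on a cube, thought of as a globe
# chopped into six square territories.
V, E, F = 8, 12, 6     # corners, edges, and faces of a cube

assert V - E + F == 2  # Euler's formula
assert 2 * E == 3 * V  # three edges meet at every corner, a fact we use next
print("V - E + F =", V - E + F)
```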

One of those blocky diagrams of New York, New Jersey, and New England, done in that way transit maps look, only worse because I'm not so good at this.
A very simplified blocky diagram of my Dominion of New England, with the vertices and edges highlighted so they’re easy to count if you want to do that.

Here’s one of those insights that’s so obvious it’s hard to believe. Every edge ends in two vertices. Three edges meet at every vertex. (We don’t have more than three territories come together at a point. If that were to happen, we’d change the map a little to find our coloring and then put it back afterwards. Pick one of the territories and give it a little disc of area around the corner where the four or five or more territories meet. The troublesome corner is gone. Once we’re done with our proof, shrink the disc back down to nothing. Coloring done!) And therefore 2E = 3V.

A polygon has the same number of edges as vertices, and if you don’t believe that then draw some and count. Every edge touches exactly two regions. Every vertex touches exactly three edges. So we can rework Euler’s formula. Multiply it by six and we get 6V - 6E + 6F = 12. And doubling the edges-and-vertices equation from the last paragraph gives 4E = 6V. So if we break up that 6E into 4E and 2E we can rewrite Euler’s formula again. It becomes 6V - 4E - 2E + 6F = 12. 6V - 4E is zero, so -2E + 6F = 12.

Do we know anything about F itself?

Well, yeah. F = F_2 + F_3 + F_4 + F_5 + F_6 + \cdots . The number of faces has to equal the sum of the number of faces of two edges, and of three edges, and of four edges, and of five edges, and of six edges, and on and on. Counting!

Do we know anything about how E and F relate?

Well, yeah. A polygon in F2 has two edges. A polygon in F3 has three edges. A polygon in F4 has four edges. And each edge runs up against two faces. So therefore 2E = 2F_2 + 3F_3 + 4F_4 + 5F_5 + 6F_6 + \cdots . This goes on forever but that’s all right. We don’t need all these terms.

Because here’s what we do have. We know that -2E + 6F = 12. And we know how to write both E and F in terms of F2, F3, F4, and so on. We’re going to show at least one of these low-subscript Fsomethings has to be positive; that is, there has to be at least one face with no more than five sides.

Start by just shoving our long sum expressions into the modified Euler’s Formula we had. That gives us this:

-(2F_2 + 3F_3 + 4F_4 + 5F_5 + 6F_6 + \cdots) + 6(F_2 + F_3 + F_4 + F_5 + F_6 + \cdots) = 12

Doesn’t look like we’ve got anywhere, does it? That’s all right. Multiply that -1 and that 6 into their parentheses. And then move the terms around, so that we group all the terms with F2 together, and all the terms with F3 together, and all the terms with F4 together, and so on. This gets us to:

(-2 + 6) F_2 + (-3 + 6) F_3 + (-4 + 6) F_4 + (-5 + 6) F_5  + (-6 + 6) F_6 + (-7 + 6) F_7 + (-8 + 6) F_8 + \cdots = 12

I know, that’s a lot of parentheses. And it adds negative numbers to positive which I guess we’re allowed to do but who wants to do that? Simplify things a little more:

4 F_2 + 3 F_3 + 2 F_4 + 1 F_5 + 0 F_6 - 1 F_7 - 2 F_8 - \cdots = 12

And now look at that. Each Fsubscript has to be zero or a positive number. You can’t have a negative number of shapes. If you can I don’t want to hear about it. Most of those Fsubscript‘s get multiplied by a negative number before they’re added up. But the sum has to be a positive number.

There’s only one way that this sum can be a positive number. At least one of F2, F3, F4, or F5 has to be a positive number. So there must be at least one region with at most five neighbors. And that’s true without knowing anything about our map. So it’s true about the original map, and it’s true about a simplified map, and about a simplified-more map, and on and on.
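
Here is a tiny numeric check of that last identity, if you would like one. The shapes are my own picks, a cube and a soccer ball (the truncated icosahedron), standing in for maps drawn on a sphere.

```python
# Check that 4*F2 + 3*F3 + 2*F4 + 1*F5 + 0*F6 - 1*F7 - ... comes out to 12
# for a couple of shapes where the face counts are easy to know.

def weighted_face_count(face_counts):
    """face_counts maps a number of sides k to how many faces have k sides."""
    return sum((6 - k) * count for k, count in face_counts.items())

cube = {4: 6}                  # six square faces
soccer_ball = {5: 12, 6: 20}   # twelve pentagons and twenty hexagons

assert weighted_face_count(cube) == 12
assert weighted_face_count(soccer_ball) == 12
# Faces with six or more sides contribute zero or less, so to reach 12
# there has to be at least one face with five or fewer sides.
```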

And that is why this hydra-style attack method always works. We can always simplify a map until it obviously can be colored with five colors. And we can go from that simplified map back to the original map, and color it in just fine. Formally, this is an existence proof: it shows there must be a way to color a map with five colors. But it does so the devious way, by showing a way to color the map. We don’t get enough existence proofs like that. And, at its critical point, we know the proof is true because we can count the number of regions and the number of edges and the number of corners they have. And we can add and subtract those numbers in the right way. Just like people imagine mathematicians do all day.

Properly this works only on the surface of a sphere. Euler’s Formula, which we use for the proof, depends on that. We get away with it on a piece of paper because we can pretend this is just a part of the globe so small we don’t see how flat it is. The vast white edge we suppose wraps around the whole world. And that’s fine since we mostly care about maps on flat surfaces or on globes. If we had a map that needed three dimensions, like one that looked at mining and water and overflight and land-use rights, things wouldn’t be so easy. Nor would they work at all if the map turned out to be on an exotic shape like a torus, a doughnut shape.

But this does have a staggering thought. Suppose we drew boundary lines. And suppose we found an arrangement of them so that we needed more than five colors. This would tell us that we have to be living on a surface such as a torus, the doughnut shape. We could learn something about the way space is curved by way of an experiment that never looks at more than where two regions come together. That we can find information about the whole of space, global information, by looking only at local stuff amazes me. I hope it at least surprises you.

From fiddling with this you probably figure the four-color map theorem should follow right away. Maybe involve a little more arithmetic but nothing too crazy. I agree, it so should. It doesn’t. Sorry.

Theorem Thursday: The Jordan Curve Theorem


There are many theorems that you have to get fairly far into mathematics to even hear of. Often they involve things that are so abstract and abstruse that it’s hard to parse just what we’re studying. This week’s entry is not one of them.

The Jordan Curve Theorem.

There are a couple of ways to write this. I’m going to fall back on the version that Richard Courant and Herbert Robbins put in the great book What Is Mathematics?. It’s a theorem in the field of topology, the study of how shapes interact. In particular it’s about simple, closed curves on a plane. A curve is just what you figure it should be. It’s closed if it … uh … closes, makes a complete loop. It’s simple if it doesn’t cross itself or have any disconnected bits. So, something you could draw without lifting pencil from paper and without crossing back over yourself. Have all that? Good. Here’s the theorem:

A simple closed curve in the plane divides that plane into exactly two domains, an inside and an outside.

It’s named for Camille Jordan, a French mathematician who lived from 1838 to 1922, and who’s renowned for work in group theory and topology. It’s a different Jordan from the one named in Gauss-Jordan Elimination, which is a matrix thing that’s important but tedious. It’s also a different Jordan from Jordan Algebras, which I remember hearing about somewhere.

The Jordan Curve Theorem is proved by reading its proposition and then saying, “Duh”. This is compelling, although it lacks rigor. It’s obvious if your curve is a circle, or a slightly squished circle, or a rectangle or something like that. It’s less obvious if your curve is a complicated labyrinth-type shape.

A labyrinth drawn in straight and slightly looped lines.
A generic complicated maze shape. Can you pick out which part is the inside and which the outside? Pretend you don’t notice that little peninsula thing in the upper right corner. I didn’t mean the line to overlap itself but I was using too thick a brush in ArtRage and didn’t notice before I’d exported the image.

It gets downright hard if the curve has a lot of corners. This is why a completely satisfying rigorous proof took decades to find. There are curves that are nowhere differentiable, that are nothing but corners, and those are hard to deal with. If you think there’s no such thing, then remember the Koch Snowflake. That’s that triangle sticking up from the middle of a straight line, that itself has triangles sticking up in the middle of its straight lines, and littler triangles still sticking up from the straight lines. Carry that on forever and you have a shape that’s continuous but always changing direction, and this is hard to deal with.

Still, you can have a good bit of fun drawing a complicated figure, then picking a point and trying to work out whether it’s inside or outside the curve. The challenging way to do that is to view your figure as a maze and look for a path leading outside. The easy way is to draw a new line. I recommend doing that in a different color.

In particular, draw a line from your target point to the outside. Some definitely outside point. You need the line to not be parallel to any of the curve’s line segments. And it’s easier if you don’t happen to intersect any vertices, but if you must, we’ll deal with that two paragraphs down.

A dot with a testing line that crosses the labyrinth curve six times, and therefore is outside the curve.
A red dot that turns out to be outside the labyrinth, based on the number of times the testing line, in blue, crosses the curve. I learned doing this that I should have drawn the dot and blue line first and then fit a curve around it so I wouldn’t have to work so hard to find one lousy point and line segment that didn’t have some problems.

So draw your testing line here from the point to something definitely outside. And count how many times your testing line crosses the original curve. If the testing line crosses the original curve an even number of times then the original point was outside the curve. If the testing line crosses the original an odd number of times then the original point was inside of the curve. Done.

If your testing line touches a vertex, well, then it gets fussy. It depends whether the two edges of the curve that go into that vertex stay on the same side as your testing line. If the original curve’s edges stay on the same side of your testing line, then don’t count that as a crossing. If the edges go on opposite sides of the testing line, then that does count as one crossing. With that in mind, carry on like you did before. An even number of crossings means your point was outside. An odd number of crossings means your point was inside.

The testing line touches a corner of the curve. The curve comes up to and goes away from the same side as the testing line.
This? Doesn’t count as the blue testing line crossing the black curve.

The testing line touches a corner of the curve. The curve crosses over, with legs on either side of the testing line at that point.
This? This counts as the blue testing line crossing the black curve.

So go ahead and do this a couple times with a few labyrinths and sample points. It’s fun and elevates your doodling to the heights of 19th-century mathematics. Also once you’ve done that a couple times you’ve proved the Jordan curve theorem.

Well, no, not quite. But you are most of the way to proving it for a special case. If the curve is a polygon, a shape made up of a finite number of line segments, then you’ve got almost all the proof done. You have to finish it off by choosing a ray, a direction, that isn’t parallel to any of the polygon’s line segments. (This is one reason this method only works for polygons, and fails for stuff like the Koch Snowflake. It also doesn’t work well with space-filling curves, which are things that exist. Yes, those are what they sound like: lines that squiggle around so much they fill up area. Some can fill volume. I swear. It’s fractal stuff.) Imagine all the lines that are parallel to that ray. There’s definitely some point along that line that’s outside the curve. You’ll need that for reference. Classify all the points on that line by whether there’s an even or an odd number of crossings between a starting point and your reference definitely-outside point. Keep doing that for all these many parallel lines.

And that’s it. The mess of points that have an odd number of intersections are the inside. The mess of points that have an even number of intersections are the outside.
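
That crossing-count test is, by the way, the same even-odd rule computer graphics uses to decide whether a point is inside a polygon. Here is a small Python version, offered as a sketch; the function and the L-shaped example polygon are mine, not anything from Courant and Robbins.

```python
def inside(point, polygon):
    """Even-odd test: is the point inside the simple closed polygon?

    The polygon is a list of (x, y) corners in order, the last connecting
    back to the first. The testing line is a horizontal ray going off to
    the right of the point, which is certainly outside eventually.
    """
    x, y = point
    crossings = 0
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count this edge only if it straddles the horizontal line through
        # the point; the half-open comparison handles touch-a-vertex cases.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                crossings += 1
    return crossings % 2 == 1   # odd number of crossings means inside


ell = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]   # an L-shaped region
print(inside((1, 1), ell))   # True: down in the foot of the L
print(inside((3, 3), ell))   # False: out in the notch of the L
```

The half-open comparison is what quietly applies the touch-a-vertex rule from above: an edge only counts when the curve genuinely crosses the testing line there.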

You won’t be surprised to know there’s versions of the Jordan curve theorem for solid objects in three-dimensional space. And for hyperdimensional spaces too. You can always work out an inside and an outside, as long as space isn’t being all weird. But it might sound like it’s not much of a theorem. So you can work out an inside and an outside; so what?

But it’s one of those great utility theorems. It pops in to places, the perfect tool for a problem you were just starting to notice existed. If I can get my rhetoric organized I hope to show that off next week, when I figure to do the Five-Color Map Theorem.

The Poincaré Homology Sphere, and Thinking What I’ll Do Next


Yenergy was good enough to write a comment about this, but people might have missed it. “Dodecahedral construction of the Poincaré homology sphere, part II” is up. The post is an illustrated attempt to describe several pages of the 1979 paper Eight Faces Of The Poincaré Homology 3-Sphere by R C Kirby and M G Scharlemann.

I admit I have to read it almost the same way a non-mathematician would. My education never took me into topology deep enough to be fluent in the notation or the working assumptions behind the paper. I may work my way farther than a non-mathematician, since I’ve been exposed to some of the symbols. The grammar of the argument is familiar. And many points of it are common to fields I did study. Nevertheless, even if you just skim the text, skipping over anything that seems too hard to follow, and look at the illustrations, you’ll get something from it.

Past that, I wanted to thank everyone for seeing me into the start of May. I am figuring to give up the post-a-day schedule. It’s exciting to have three thousand-word essays and four posts of more variable length each week, but I need to relax that schedule some. I am considering, based on the conversation I got into with Elke Stangl about the Yukawa Potential, whether to do a string of essays about closed orbits. That would almost surely involve many more equations than is normal around here. But it could make for a nice change of pace.

Reading the Comics, February 11, 2016: Apples And Pointing Things Out Edition


I didn’t expect quite so many mathematically themed comic strips so soon after the last round. Most of them just highlight one or another familiar joke. So this edition is mostly just noting that yeah, the joke is there and has been successfully made. There’s an exception, though. Enjoy.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 7th of February is a cute chart. It’s got an unusual label to the x-axis. Now that I’ve seen it, I’m surprised not to see more jokes constructed this way.

Ruben Bolling’s Super-Fun-Pak Comix for the 7th of February was this essay’s Schrödinger’s Cat mention. I’m considering putting a moratorium on Schrödinger’s Cat strips, at least for a little while. I need to find something fresh to say about them.

Russell Myers’s Broom Hilda for the 8th of February inspires a Fermi problem. These are named for the great physicist Enrico Fermi, who often asked problems of estimation and order of magnitude. Given a few pieces of information, can you say about how big something might be? In this case, how many hours of work are spent peeling labels off grocery store apples? If we had the right information it would be easy to answer. How long does the average label take to peel off? How many apples get peeled each year? I admit not knowing either offhand. I would guess the average label-peeling time to be under five seconds, but if I wanted to be exact I’d get a bag, a stopwatch, and a sheet of paper for notes.

How many apples get peeled each year? That’s tougher. We might be able to get the total number of apples sold. But not every apple is sold with a label on it. A bag of apples doesn’t need individual labels, after all. But we might estimate what fraction of apples are sold loose and thus with labels by looking in local supermarkets. That requires assuming the turnover of apple stock is about the same whether the apple’s labelled or unlabelled. It also assumes our local supermarket is representative of the whole nation’s. But if we’re just looking for an idea of how big the number should be, or if we’re looking for what further information we have to determine, that’s good enough.

Wikipedia says the United States produced 4,100,046 metric tons of apples in 2012, the last year they have records for. If an apple is about a fifth of a kilogram, then, that implies something like 20,500,230,000 apples got produced in the United States that year. Let’s guess that three-quarters of them go right to industrial uses, into the hands of the Apple Pie Trusts and other corporate users that don’t need labelling, while the remaining quarter go to consumers. That’s a wild guess on my part, but, industry is big. And of those, I’ll guess two-fifths get sold individually, with labels on. The rest can be sold in bags or whatnot. I’m basing that on what I kind of remember from my last trip to the farmer’s market with the free coffee bar and the bag-your-own candies.

So this implies something like 2,050,023,000 apples could be sold with labels. And if each label takes an average of five seconds, then this implies a total of 170,835,250 minutes spent unpeeling apple labels. That sounds like a big number, but it’s really only over 2,847,254 hours, or not quite 118,636 days. Of course, divided up among all the apple-eaters it’s not so much per year.
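
If you would like to replay the estimate with your own guesses, the whole chain of arithmetic fits in a few lines of Python. The numbers here are just the guesses from above, labelled so you can swap them out.

```python
# The apple-label estimate, with every guess spelled out so it can be changed.
metric_tons = 4_100_046    # United States apple production, 2012
kg_per_apple = 0.2         # an apple is about a fifth of a kilogram
consumer_fraction = 0.25   # wild guess: a quarter of apples go to consumers
labelled_fraction = 0.4    # wild guess: two-fifths of those sold loose, with labels
seconds_per_label = 5      # guess at the time to peel one label

apples = metric_tons * 1000 / kg_per_apple
labelled = apples * consumer_fraction * labelled_fraction
hours = labelled * seconds_per_label / 3600
print(f"{labelled:,.0f} labelled apples, about {hours:,.0f} hours of peeling")
```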

My number is wrong. I picked important bits of information out of thin air. But if I want to be more precise, I have an idea of what I need to learn. And I have an idea of how big I should expect the right answer to be. I can go from this to a better estimate, if I think now it’s worth being more exact.

Stephan Pastis’s Pearls Before Swine tries picking a fight with mathematicians on the 8th of February, with Rat boasting how he’s never used algebra. I’m not sure why bragging about not using algebra is supposed to be funny. The strip says it’s cathartic. I suppose. But it’s a joke that’s been told many times over and this doesn’t feel like a fresh use.

Rick Stromoski’s Soup To Nutz for the 8th of February is a fractions joke. Royboy perceives a difference between one-half of an orange and four-eighths of an orange. I can’t say there isn’t a difference in connotation between the two representations.

Percy Crosby’s Skippy for the 9th of February (a rerun from sometime in 1928) shows Sookie with a ball. Well, a ball with a hole in it. A topologist would agree. If you’re interested in how the points on, or inside, an object connect to each other then a hoop like this is the same as a ball with a hole through it or a doughnut or bagel. This is my favorite for this group, because of the wonderful convergence of kid logic and serious mathematics.

Larry Wright’s Motley Classics for the 10th of February (a rerun from that date in 1988) is a joke about the terrors of word problems. I’m not convinced an authentic child would have trouble adding up all those cookies.

Hector D Cantu and Carlos Castellanos’s Baldo for the 11th of February reveals they have a week’s more lead time than most of the comics on the page.

Reading the Comics, August 14, 2015: Name-Dropping Edition


There have been fewer mathematically-themed comic strips than usual the past week, but they have kept coming in. This week seems to have included a fair number of name-drops of interesting mathematical concepts.

David L Hoyt and Jeff Knurek’s Jumble (August 10) name-drops the abacus. It has got me wondering about how abacuses were made in the pre-industrial age. On the one hand they could in principle be made by anybody who has beads and rods. On the other hand, a skillfully made abacus will make the tool so much more effective. Who made and who sold them? I honestly don’t know.

He needed a partner to build a new abacus business, and his buddy said _____ __ __.
David L Hoyt and Jeff Knurek’s Jumble for the 10th of August, 2015. The link will likely expire around the 10th of September.

Mick Mastroianni and Mason Mastroianni’s Dogs of C Kennel (August 11) has Tucker reveal that most of the mathematics he scrawls is just to make his work look harder. I suspect Tucker overdid his performance. My experience is you can get the audience’s eyes to glaze over with much less mathematics on the board.

Leigh Rubin’s Rubes (August 11) mentions chaos theory. It’s not properly speaking a Chaos Butterfly comic strip. But certainly it’s in the vicinity.

Zach Weinersmith’s Saturday Morning Breakfast Cereal (August 11) name-drops Banach-Tarski. This is a reference to a famous-in-some-circles theorem, or paradox. The theorem, published in 1924 by Stefan Banach and Alfred Tarski, shows something astounding. It’s possible to take a ball, and disassemble it into a number of pieces. Then, doing nothing more than sliding and rotating the pieces, one can reassemble the pieces to get two balls, each with the same volume as the original. If that doesn’t sound ridiculous enough, consider that it’s possible to do this trick by cutting the ball into as few as five pieces. (Four, if you’re willing to exclude the exact center of the original ball.) So you can see why this is called a paradox, and why this joke works for people who know the background.

Scott Hilburn’s The Argyle Sweater (August 12) illustrates that joke about rounding up the cattle you might have seen going around.

Reading the Comics, April 22, 2015: April 21, 2015 Edition


I try to avoid doing Reading The Comics entries back-to-back since I know they can get a bit repetitive. How many ways can I say something is a student-resisting-the-word-problem joke? But if Comic Strip Master Command is going to send a half-dozen strips at least mentioning mathematical topics in a single day, how can I resist the challenge? Worse, what might they have waiting for me tomorrow? So here’s a bunch of comic strips from the 21st of April, 2015:

Mark Anderson’s Andertoons plays on the idea of a number being used up. I’m most tickled by this one. I have heard that the New York Yankees may be running short on uniform numbers after having so many retired. It appears they’ve only retired 17 numbers, but they do need numbers for a 40-player roster as well as managers and coaches and other participants. Also, and this delights me, two numbers are retired for two people each. (Number 8, for Yogi Berra and Bill Dickey, and Number 42, for Jackie Robinson and Mariano Rivera.)


Reading The Comics, September 24, 2014: Explained In Class Edition


I’m a fan of early 20th century humorist Robert Benchley. You might not be yourself, but it’s rather likely that among the humorists you do like are a good number of people who are fans of his. He’s one of the people who shaped the modern American written-humor voice, and as such his writing hasn’t dated, the way that, for example, a 1920s comic strip will often seem to come from a completely different theory of what humor might be. Among Benchley’s better-remembered quotes, and one of those striking insights into humanity, not to mention the best productivity tip I’ve ever encountered, was something he dubbed the Benchley Principle: “Anyone can do any amount of work, provided it isn’t the work he is supposed to be doing at the moment.” One of the comics in today’s roundup of mathematics-themed comics brought the Benchley Principle to mind, and I mean to get to how it did and why.

Eric The Circle (by ‘Griffinetsabine’ this time) (September 18) steps again into the concerns of anthropomorphized shapes. It’s also got a charming-to-me mention of the trapezium, the geometric shape that’s going to give my mathematics blog whatever immortality it shall have.

Bill Watterson’s Calvin and Hobbes (September 20, rerun) dodged on me: I thought after the strip from the 19th that there’d be a fresh round of explanations of arithmetic, this time including imaginary numbers like “eleventeen” and “thirty-twelve” and the like. Not so. After some explanation of addition by Calvin’s Dad, Spaceman Spiff would take up the task on the 22nd of smashing together Mysterio planets 6 and 5, which takes a little time to really get started, and finally sees the successful collision of the worlds. Let this serve as a reminder: translating a problem to a real-world application can be a fine way to understand what is wanted, but you have to make sure that in the translation you preserve the result you wanted from the calculation.

Joe has memorized the odds for various poker hands. Four times four, not so much.
Rick Detorie’s One Big Happy for the 21st of September, 2014. I confess ignorance as to whether these odds are accurate.

It’s Rick Detorie’s One Big Happy (September 21) which brought the Benchley Principle to my mind. Here, Joe is shown to know extremely well the odds of poker hands, but to have no chance at having learned the multiplication table. It seems like something akin to Benchley’s Principle is at work here: Joe memorizing the times tables might be socially approved, but it isn’t what he wants to do, and that’s that. But inspiring the desire to know something is probably the one great challenge facing everyone who means to teach, isn’t it?

Jonathan Lemon’s Rabbits Against Magic (September 21) features a Möbius strip joke that I imagine was a good deal of fun to draw. The Möbius strip is one of those concepts that really catches the imagination, since it seems to defy intuition that something should have only the one side. I’m a little surprised that topology isn’t better-popularized, as it seems like it should be fairly accessible — you don’t need equations to get some surprising results, and you can draw pictures — but maybe I just don’t understand the field well enough to understand what’s difficult about bringing it to a mass audience.

Hector D. Cantu and Carlos Castellanos’s Baldo (September 23) tells a joke about percentages and students’ self-confidence about how good they are with “numbers”. In strict logic, yes, the number of people who say they are and who say they aren’t good at numbers should add up to something under 100 percent. But people don’t tend to be logically perfect, and are quite vulnerable to the way questions are framed, so the scenario is probably more plausible in the real world than the writer intended.

Steve Moore’s In The Bleachers (September 23) falls back on the most famous of all equations as representative of “something it takes a lot of intelligence to understand”.
