My 2019 Mathematics A To Z: Category Theory


Today’s A To Z term is category theory. It was suggested by aajohannas, on Twitter as @aajohannas. It’s a topic I have long wanted to know better, and every year or so I make a new attempt to learn it without ever feeling like I’ve made progress.

The language of it is beautiful, though. Much of its work is attractive just to see, too, as the field’s developed notation that could be presented as visual art. Much of mathematics could be visual art, yes, but this is art you could almost create in ASCII. It’s amazing.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite carries the letter 'S' to complete the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Category Theory.

What is the most important part of mathematics? Well, the part you wish you understood, yes. But what’s the fundamental part? The piece of mathematics that we could feel most sure an alien intelligence would agree is mathematics?

There’s idle curiosity behind this, yes. It’s a question implicit in some ideals of the Enlightenment. The notion that we should be able to find truths that all beings capable of reason would agree upon, and could find for themselves. Mathematics seems particularly good for this. If we have something proven by deductive logic from clearly stated axioms and definitions, then we know something true.

There’s practicality too. In the late 19th and early 20th centuries, (western) mathematics tried to find logically rigorous foundations. You might have thought we always had that and, uh, not so much. It turns out a complete rigorous logical proof of even simple stuff takes forever. Mathematicians compose enough of an argument to convince other mathematicians that we could fill in the details. But we still trusted there was a rigorous foundation. The question is, what is it?

A great candidate for this was set theory, which was a breakthrough. The basic modern idea of set theory builds on bunches of things, called elements. And collections of those things, called sets. And we build rigorous ideas of what it means for elements to be members of sets. This doesn’t sound like much. Powerful ideas never do.

I don’t know that everyone’s intuition is like this. But my gut wants to think a “powerful” result is, like, a great rocket. Some enormous and prominent and mighty thing that blasts through a problem like gravity or an atmosphere. This is almost the opposite of what mathematics means by “powerful”. A rocket is a fiddly, delicate thing. It has millions of components made to tight specifications. It can only launch when lots of conditions are exactly right. A theorem that gives a great result, but has a long list of prerequisites and lemmas feeding into it, resembles this. A powerful mathematical result is more like the gravity that the rocket overcomes. It tends to suppose little about the situation, and so it provides results that are applicable over the whole field. Or over a wide field, or a surprising breadth of topics. And, really, mighty as a rocket might be, the gravity it fights is more so.

So set theory is powerful. It can explain many things. Most amazing is that we can represent arithmetic with it. At least we can get to integers, and all that we do with integers, and that in not too much work. It makes sense that mathematicians latched onto this as critical. It fueled much of the thinking behind the New Math, the infamous attempted United States educational reform of the 1960s and 70s. I grew up in the tail end of this, learning unions and intersections and complements along with times tables, and I delighted in it.

But even before New Math became a coherent idea there was a better idea. Emmy Noether, mentioned yesterday, is not a part of it. But a part of her insight into physics, and into group theory, was an understanding of structure. That important mathematics results from considering what we can do with sets of things. And what things we can do that produce invariants, things that don’t change. Saunders Mac Lane, who had studied with Noether at Göttingen, and Samuel Eilenberg used what looks to me like this principle. In the 1940s they organized category theory.

Category Theory looks at first like set theory only made terrifying. I’m not very comfortable with it myself. It’s an abstract field, and I’m more at home with stuff I can write a quick Octave program to double-check. Many results in category theory are described, or even proved, with beautiful directed-graph lattices, the commutative diagrams. They show how things relate to one another. This is definitely the field to study if you like drawing arrows.

Part of the statement of a theorem, which includes three squares showing relations between objects labelled A and A', D(I) and D'(I), lim D and lim D'.
I admit I don’t know what point is being proved here. But the square structure illustrated here, and the way successive squares will, say, replace lim D with A, or p_I with f_I, is almost a sestina. From page 145 of Dr Tom Leinster’s Basic Category Theory.

Just as set theory does, category theory starts with things, called objects. And these objects get piled together into collections. And then there’s another collection of relationships between these collections. These relationships you call maps or morphisms or arrows, based on whatever the first book you kind of understood called them. I’m partial to “maps”. And then we have rules by which these maps compose, that is, where two maps reduce to a single map. This bundle of things — the objects, the collections, and the maps — is a category.
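
If it helps to see the shape of this bundle in code: Haskell’s base library has a Category type class that captures exactly it, an identity map for every object plus a rule for composing maps. Here’s a minimal sketch (mirroring Control.Category), with ordinary functions as the most familiar instance:

    import Prelude hiding (id, (.))

    -- The shape of a category: every object has an identity map,
    -- and maps compose end-to-end. (This mirrors Control.Category.)
    class Category cat where
      id  :: cat a a
      (.) :: cat b c -> cat a b -> cat a c

    -- Types-and-functions form a category: objects are types,
    -- maps are ordinary functions.
    instance Category (->) where
      id x      = x
      (g . f) x = g (f x)

Composition here also has to be associative, and composing with id has to change nothing; those are the rules that make the bundle a category.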

These objects can start out looking like elements, and the collections like sets, and the maps like functions. This gives me, at least, a patch of ground where I feel like I know what I’m doing. But what we need of things to be objects and collections and maps is very little. The result is great power. We can describe set theory in the language of categories. So we can describe arithmetic in category theory. There’s a bit of a hike from the start of category theory to, like, knowing what 18 plus 7 is.

But we’re not bound to anything that concrete. We can describe, for example, groups as categories. This gives us results like when we can factor polynomials. Or whether compass and straightedge can trisect an arbitrary angle. (There’s some work behind this too.) We can describe vector spaces as categories. Heady results like the idea that one function might be orthogonal to another lurk within this field. Manifolds, spaces that work like normal space, are part of the field. So are topological spaces, which tell us about shapes.
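
One concrete way to see groups as categories, sketched in code: a group is a category with exactly one object, whose morphisms are the group’s elements, with the group operation as composition, and every morphism invertible. Here the integers under addition play the group (the type parameters are phantoms, standing in for the single object):

    import Prelude hiding (id, (.))
    import Control.Category

    -- The integers under addition, viewed as a one-object category.
    newtype AddZ a b = AddZ Integer

    instance Category AddZ where
      id              = AddZ 0          -- the group's identity element
      AddZ m . AddZ n = AddZ (m + n)    -- composition is the group operation

Every morphism AddZ n has an inverse, AddZ (-n); that invertibility is what makes this category a group rather than merely a monoid.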

If you aren’t yet dizzy then consider this. A category is itself an object. So we can define maps between categories. These we call functors. Which themselves have use in computer science, as a way some kinds of software can be programmed well. More, maps themselves are objects. We can define mappings between maps. These we call natural transformations. Which are the things that Eilenberg and Mac Lane were particularly interested in, to start with. Category theory grew in part out of needing a better understanding of natural transformations.
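
Haskell programmers meet all three layers. A sketch, hedged to Haskell’s types-and-functions category: a Functor instance like Maybe or [] is a functor, and a polymorphic function between two functors, one that works uniformly at every type, is a natural transformation.

    -- A natural transformation from the list functor to the Maybe functor.
    safeHead :: [a] -> Maybe a
    safeHead []      = Nothing
    safeHead (x : _) = Just x

    -- "Natural" means it commutes with fmap, uniformly in a:
    --   fmap f . safeHead == safeHead . fmap f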

I do not know what to recommend for people who want to really learn category theory. I haven’t found the textbook or the blog that makes me feel like I am mastering the subject. Writing this essay has introduced me to Dr Tom Leinster’s Basic Category Theory, which I’ve enjoyed skimming. Exercise 3.3.1, for example, seems like exactly the sort of problem I would pose if I knew category theory well enough to write a book on it.

Is this, finally, the mathematics we could be sure an alien would recognize? I’m skeptical, but I always am. It seems to me we build mathematics on arithmetic and geometry. Category theory, seeming to offer explanations of both, is a natural foundation for that. But we are evolved to see the world in terms of number and shape. Of course we see arithmetic and geometry as mathematics. Can we count on every being capable of reason seeing the same things as important? … I admit I can’t imagine a being we might communicate with not recognizing both. But this may say more about the limits of my imagination than about the limits of what could be mathematics.


Thanks for reading. All the Fall 2019 A To Z posts should be at this link. I hope to have the second essay of the week posted Thursday. This year’s and all past A To Z sequences should be at this link. And if you have thoughts about other topics I might cover, please offer suggestions for the letters F through H.


The Summer 2017 Mathematics A To Z: Functor


Gaurish gives me another topic for today. I’m now no longer sure whether Gaurish hopes me to become a topology blogger or a category theory blogger. I have the last laugh, though. I’ve wanted to get better-versed in both fields and there’s nothing like explaining something to learn about it.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Functor.

So, category theory. It’s a foundational field. It talks about stuff that’s terribly abstract. This means it’s powerful, but it can be hard to think of interesting examples. I’ll try, though.

It starts with categories. These have three parts. The first part is a set of things. (There always is.) The second part is a collection of matches between pairs of things in the set. They’re called morphisms. The third part is a rule that lets us combine two morphisms into a new, third one. That is: suppose ‘a’, ‘b’, and ‘c’ are things in the set. Then there’s a morphism that matches $a \rightarrow b$, and a morphism that matches $b \rightarrow c$. And we can combine them into another morphism that matches $a \rightarrow c$. So we have a set of things, and a set of things we can do with those things. And the set of things we can do has structure of its own: combining is associative, and every thing has an identity morphism that combining with changes nothing. (Not quite a group, since morphisms don’t have to be undoable.)
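
As a sketch in code, taking Haskell’s types as the things and functions as the morphisms:

    -- Three things: the types Int, String, and Bool.
    f :: Int -> String     -- a morphism matching a -> b
    f = show

    g :: String -> Bool    -- a morphism matching b -> c
    g = not . null

    h :: Int -> Bool       -- their combination, a morphism matching a -> c
    h = g . f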

This describes a lot of stuff. Group theory fits seamlessly into this description. Most of what we do with numbers is a kind of group theory. Vector spaces do too. Most of what we do with analysis has vector spaces underneath it. Topology does too. Most of what we do with geometry is an expression of topology. So you see why category theory is so foundational.

Functors enter our picture when we have two categories. Or more. They’re about the ways we can match up categories. But let’s start with two categories. One of them I’ll name ‘C’, and the other, ‘D’. A functor has to match everything that’s in the set of ‘C’ to something that’s in the set of ‘D’.

And it does more. It has to match every morphism between things in ‘C’ to some other morphism, between corresponding things in ‘D’. It’s got to do it in a way that satisfies that combining, too. That is, suppose that ‘f’ and ‘g’ are morphisms for ‘C’. And that ‘f’ and ‘g’ combine to make ‘h’. Then, the functor has to match ‘f’ and ‘g’ and ‘h’ to some morphisms for ‘D’. The combination of whatever ‘f’ matches to and whatever ‘g’ matches to has to be whatever ‘h’ matches to.
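
In Haskell terms, that combining requirement is exactly the functor laws. A sketch (this mirrors the standard Functor class):

    -- A functor from Haskell's types-and-functions category to itself:
    -- fmap' matches a morphism f :: a -> b to a morphism t a -> t b.
    class MyFunctor t where
      fmap' :: (a -> b) -> t a -> t b

    -- The laws every instance must satisfy:
    --   fmap' (g . f) == fmap' g . fmap' f   -- combining is preserved
    --   fmap' id      == id                  -- identities are preserved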

This might sound to you like a homomorphism. If it does, I admire your memory or mathematical prowess. Functors are about matching one thing to another in a way that preserves structure. Structure is the way that sets of things can interact. We naturally look for stuff made up of different things that have the same structure. Yes, functors themselves form a category. That is, you can make a brand-new category whose set of things are the functors between two other categories. This is a good spot to pause while the dizziness passes.

There are two kingdoms of functor. You tell them apart by what they do with the morphisms. Here again I’m going to need my categories ‘C’ and ‘D’. I need a morphism for ‘C’. I’ll call that ‘f’. ‘f’ has to match something in the set of ‘C’ to something in the set of ‘C’. Let me call the first something ‘a’, and the second something ‘b’. That’s all right so far? Thank you.

Let me call my functor ‘F’. ‘F’ matches all the elements in ‘C’ to elements in ‘D’. And it matches all the morphisms on the elements in ‘C’ to morphisms on the elements in ‘D’. So if I write ‘F(a)’, what I mean is look at the element ‘a’ in the set for ‘C’. Then look at what element in the set for ‘D’ the functor matches with ‘a’. If I write ‘F(b)’, what I mean is look at the element ‘b’ in the set for ‘C’. Then pick out whatever element in the set for ‘D’ gets matched to ‘b’. If I write ‘F(f)’, what I mean is look at the morphism ‘f’ between elements in ‘C’. Then pick out whatever morphism between elements in ‘D’ gets matched with ‘f’.
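
A concrete case, sketched with the standard Maybe functor: take F to be Maybe. Then F(a) is the type Maybe a, and F(f) is fmap f.

    -- With F = Maybe: F matches the type Int to Maybe Int, the type String
    -- to Maybe String, and the morphism show :: Int -> String to fmap show.
    example :: Maybe String
    example = fmap show (Just (3 :: Int))   -- evaluates to Just "3"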

Here’s where I’m going with this. Suppose my morphism ‘f’ matches ‘a’ to ‘b’. Does the functor of that morphism, ‘F(f)’, match ‘F(a)’ to ‘F(b)’? Of course, you say, what else could it do? And the answer is: why couldn’t it match ‘F(b)’ to ‘F(a)’?

No, it doesn’t break everything. Not if you’re consistent about swapping the order of the matchings. The normal everyday order, the one you’d thought couldn’t have an alternative, is a “covariant functor”. The crosswise order, this second thought, is a “contravariant functor”. Covariant and contravariant are distinctions that weave through much of mathematics. They particularly appear through tensors and the geometry they imply. In that introduction they tend to be difficult, even mean, creations, since in regular old Euclidean space they don’t mean anything different. They’re different for non-Euclidean spaces, and that’s important and valuable. The covariant versus contravariant difference is easier to grasp here.
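
Here’s how the two kingdoms look in Haskell terms, a sketch (the standard library’s Data.Functor.Contravariant does this for real):

    -- Covariant:      fmap      :: (a -> b) -> t a -> t b   -- same direction
    -- Contravariant:  contramap :: (a -> b) -> t b -> t a   -- crosswise
    class Contravariant t where
      contramap :: (a -> b) -> t b -> t a

    -- Predicates are naturally contravariant: to test an 'a', first turn
    -- it into a 'b', then apply a test on 'b's.
    newtype Predicate b = Predicate (b -> Bool)

    instance Contravariant Predicate where
      contramap f (Predicate p) = Predicate (p . f)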

Functors work their way into computer science. The avenue here is in functional programming. That’s a method of programming in which instead of the normal long list of commands, you write a single line of code that holds like fourteen “->” symbols that makes the computer stop and catch fire when it encounters a bug. The advantage is that when you have the code debugged it’s quite speedy and memory-efficient. The disadvantage is if you have to alter the function later, it’s easiest to throw everything out and start from scratch, beginning from vacuum-tube-based computing machines. But it works well while it does. You just have to get the hang of it.