The Summer 2017 Mathematics A To Z: Zeta Function


Today Gaurish, of For the love of Mathematics, gives me the last subject for my Summer 2017 A To Z sequence. And also my greatest challenge: the Zeta function. The subject comes to all pop mathematics blogs. It comes to all mathematics blogs. It’s not difficult to say something about a particular zeta function. But to say something at all original? Let’s watch.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Zeta Function.

The spring semester of my sophomore year I had Intro to Complex Analysis. Monday Wednesday 7:30; a rare evening class, one of the few times I’d eat dinner and then go to a lecture hall. There I discovered something strange and wonderful. Complex Analysis is a far easier topic than Real Analysis. Both are courses about why calculus works. But why calculus for complex-valued numbers works is a much easier problem than why calculus for real-valued numbers works. It’s dazzling. Part of this is that Complex Analysis, yes, builds on Real Analysis. So Complex can take for granted some things that Real has to prove. I didn’t mind. Given the way I crashed through Intro to Real Analysis I was glad for a subject that was, relatively, a breeze.

As we worked through Complex Variables and Applications so many things, so very many things, got to be easy. The basic unit of complex analysis, at least as we young majors learned it, was in contour integrals. These are integrals whose value depends on the values of a function on a closed loop. The loop is in the complex plane. The complex plane is, well, your ordinary plane. But we say the x-coordinate and the y-coordinate are parts of the same complex-valued number. The x-coordinate is the real-valued part. The y-coordinate is the imaginary-valued part. And we call that summation ‘z’. In complex-valued functions ‘z’ serves the role that ‘x’ does in normal mathematics.
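In symbols, with \imath standing for the imaginary unit, that summation is

z = x + \imath y

and everything else follows from treating that as one number.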

So a closed loop is exactly what you think. Take a rubber band and twist it up and drop it on the table. That’s a closed loop. Suppose you want to integrate a function, ‘f(z)’. If you can always take its derivative on this loop and on the interior of that loop, then its contour integral is … zero. No matter what the function is. As long as it’s “analytic”, as the terminology has it. Yeah, we were all stunned into silence too. (Granted, mathematics classes are usually quiet, since it’s hard to get a good discussion going. Plus many of us were in post-dinner digestive lulls.)
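You can watch this happen without taking the course. A numerical sketch, assuming the numpy library; the loop is the unit circle, and the functions are my own arbitrary picks:

```python
# A numerical peek at contour integrals. The loop is the unit circle;
# z**2 is analytic everywhere, while 1/z fails to be analytic at 0.
import numpy as np

n = 100_000
t = np.linspace(0, 2 * np.pi, n, endpoint=False)
z = np.exp(1j * t)               # points along the unit circle
dz = 1j * z * (2 * np.pi / n)    # dz = i z dt for each little step

print(np.sum(z**2 * dz))         # about 0: the analytic case
print(np.sum((1 / z) * dz))      # about 2*pi*i: a singularity inside changes things
```

That second line is a preview of where the course goes next.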

Integrating regular old functions of real-valued numbers is this tedious process. There’s sooooo many rules and possibilities and special cases to consider. There’s sooooo many tricks that get you the integrals of some functions. And then here, with complex-valued integrals for analytic functions, you know the answer before you even look at the function.

As you might imagine, since this is only page 113 of a 341-page book there’s more to it. Most functions that anyone cares about aren’t analytic. At least they’re not analytic everywhere inside regions that might be interesting. There’s usually some points where an interesting function ‘f(z)’ is undefined. We call these “singularities”. Yes, like starships are always running into. Only we rarely get propelled into other universes or other times or turned into ghosts or stuff like that.

So much of the rest of the course turns into ways to avoid singularities. Sometimes you can spackle them over. This is when the function happens not to be defined somewhere, but you can see what it ought to be. Sometimes you have to do something more. This turns into a search for “removable” singularities. And this does something so brilliant it looks illicit. You modify your closed loop, so that it comes up very close, as close as possible, to the singularity, but studiously avoids it. Follow this game of I’m-not-touching-you right and you can turn your integral into two parts. One is the part that’s equal to zero. The other is the part that’s a constant times whatever the function is at the singularity you’re removing. And that ought to be easy to find the value for. (Being able to find a function’s value doesn’t mean you can find its derivative.)
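That constant, for the record, is 2 \pi \imath. Played out rigorously, the game proves the Cauchy integral formula: if f is analytic on and inside the loop C, and C wraps once around the point z_0, then

\oint_C \frac{f(z)}{z - z_0} \, dz = 2 \pi \imath \, f(z_0)

which turns an integral into a single function evaluation.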

Those tricks were hard to master. Not because they were hard. Because they were easy, in a context where we expected hard. But after that we got into how to move singularities. That is, how to do a change of variables that moved the singularities to where they’re more convenient for some reason. How could this be more convenient? Because of chapter five, series. In regular old calculus we learn how to approximate well-behaved functions with polynomials. In complex-variable calculus, we learn the same thing all over again. They’re polynomials of complex-valued variables, but it’s the same sort of thing. And not just polynomials, but things that look like polynomials except they’re powers of \frac{1}{z} instead. These open up new ways to approximate functions, and to remove singularities from functions.
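These polynomial-like things with powers of \frac{1}{z} in them are Laurent series. Expanded around a point z_0, the general shape is

f(z) = \sum_{n = -\infty}^{\infty} a_n \left( z - z_0 \right)^n

and the terms with negative n are the ones that carry news about a singularity there.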

And then we get into transformations. These are about turning a problem that’s hard into one that’s easy. Or at least different. They’re a change of variable, yes. But they also change what exactly the function is. This reshuffles the problem. Makes for a change in singularities. Could make ones that are easier to work with.

One of the useful, and so common, transforms is called the Laplace-Stieltjes Transform. (“Laplace” is said like you might guess. “Stieltjes” is said, or at least we were taught to say it, like “Stilton cheese” without the “ton”.) And it tends to create functions that look like a series, the sum of a bunch of terms. Infinitely many terms. Each of those terms looks like a number times another number raised to some constant times ‘z’. As the course came to its conclusion, we were all prepared to think about these infinite series. Where singularities might be. Which of them might be removable.
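In symbols, the series I mean look like

\sum_{n = 1}^{\infty} a_n e^{-\lambda_n z}

where the a_n are coefficients and the \lambda_n are an increasing sequence of constants. Each term: a number, times another number raised to a constant times ‘z’.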

These functions, these results of the Laplace-Stieltjes Transform, we collectively call ‘zeta functions’. There are infinitely many of them. Some of them are relatively tame. Some of them are exotic. One of them is world-famous. Professor Walsh — I don’t mean to name-drop, but I discovered the syllabus for the course tucked in the back of my textbook and I’m delighted to rediscover it — talked about it.

That world-famous one is, of course, the Riemann Zeta function. Yes, that same Riemann who keeps turning up, over and over again. It looks simple enough. Almost tame. Take the counting numbers, 1, 2, 3, and so on. Take your ‘z’. Raise each of the counting numbers to that ‘z’. Take the reciprocals of all those numbers. Add them up. What do you get?
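What you get, written compactly, is

\zeta(z) = \sum_{n = 1}^{\infty} \frac{1}{n^z}

at least wherever the sum converges, which is wherever the real part of ‘z’ is greater than 1. (It’s the series above with \lambda_n = \log n.) Everywhere else the function is defined by analytic continuation, the grown-up version of those singularity-removing tricks.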

A mass of fascinating results, for one. Functions you wouldn’t expect are concealed in there. There’s strips where the real part is zero. There’s strips where the imaginary part is zero. There’s points where both the real and imaginary parts are zero. We know infinitely many of them. If ‘z’ is -2, for example, the sum is zero. Also if ‘z’ is -4. -6. -8. And so on. These are easy to show, and so are dubbed ‘trivial’ zeroes. To say some are ‘trivial’ is to say that there are others that are not trivial. Where are they?

Professor Walsh explained. We know of many of them. The nontrivial zeroes we know of all share something in common. They have a real part that’s equal to 1/2. There’s a zero at about the number \frac{1}{2} - \imath 14.13. Also at \frac{1}{2} + \imath 14.13. There’s one at about \frac{1}{2} - \imath 21.02. Also about \frac{1}{2} + \imath 21.02. (There’s a symmetry, you maybe guessed.) Every nontrivial zero we’ve found has that same real part, 1/2. But we don’t know that they all do. Nobody does. It is the Riemann Hypothesis, the great unsolved problem of mathematics. Much more important than that Fermat’s Last Theorem, which back then was still merely a conjecture.
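You needn’t take anyone’s word for these numbers. A sketch, assuming the mpmath library is installed; its zeta and zetazero functions know all of this already:

```python
# Checking zeroes of the Riemann Zeta function with mpmath.
from mpmath import zeta, zetazero

print(zeta(-2))           # a trivial zero: 0.0
print(zeta(-4))           # another trivial zero: 0.0

print(zetazero(1))        # first nontrivial zero, about 0.5 + 14.1347i
print(zetazero(2))        # second one, about 0.5 + 21.0220i
print(zeta(zetazero(1)))  # plug it back in: zero, to rounding error
```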

What a prospect! What a promise! What a way to set us up for the final exam in a couple of weeks.

I had an inspiration, a kind of scheme of showing that a nontrivial zero couldn’t be within a given circular contour. Make the size of this circle grow. Move its center farther away from the z-coordinate \frac{1}{2} + \imath 0 to match. Show there’s still no nontrivial zeroes inside. And therefore, logically, since I would have shown nontrivial zeroes couldn’t be anywhere but on this special line, and we know nontrivial zeroes exist … I leapt enthusiastically into this project. A little less enthusiastically the next day. Less so the day after. And on. After maybe a week I went a day without working on it. But came back, now and then, prodding at my brilliant would-be proof.

The Riemann Zeta function was not on the final exam, which I’ve discovered was also tucked into the back of my textbook. It asked more things like finding all the singular points and classifying what kinds of singularities they were for functions like e^{-\frac{1}{z}} instead. If the syllabus is accurate, we got as far as page 218. And I’m surprised to see the professor put his e-mail address on the syllabus. It was merely “bwalsh@math”, but understand, the Internet was a smaller place back then.
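(That one, if you’re curious, has an essential singularity at z = 0, the kind you can’t spackle over. Write out its series in powers of \frac{1}{z} and the negative powers never stop:

e^{-\frac{1}{z}} = 1 - \frac{1}{z} + \frac{1}{2! \, z^2} - \frac{1}{3! \, z^3} + \cdots

so there’s no one value the function ‘ought to be’ there.)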

I finished the course with an A-, but without answering any of the great unsolved problems of mathematics.

The End 2016 Mathematics A To Z: General Covariance


Today’s term is another request, and another of those that tests my ability to make something understandable. I’ll try anyway. The request comes from Elke Stangl, whose “Research Notes on Energy, Software, Life, the Universe, and Everything” blog I first ran across years ago, when she was explaining some dynamical systems work.

General Covariance

So, tensors. They’re the things mathematicians get into when they figure vectors just aren’t hard enough. Physics majors learn about them too. Electrical engineers really get into them. Some material science types too.

You maybe notice something about those last three groups. They’re interested in subjects that are about space. Like, just, regions of the universe. Material scientists wonder how pressure exerted on something will get transmitted. The structure of what’s in the space matters here. Electrical engineers wonder how electric and magnetic fields send energy in different directions. And physicists — well, everybody who’s ever read a pop science treatment of general relativity knows. There’s something about the shape of space something something gravity something equivalent acceleration.

So this gets us to tensors. Tensors are this mathematical structure. They’re about how stuff that starts in one direction gets transmitted into other directions. You can see how that’s got to have something to do with transmitting pressure through objects. It’s probably not too much work to figure how that’s relevant to energy moving through space. That it has something to do with space as just volume is harder to imagine. But physics types have talked about it quite casually for over a century now. Science fiction writers have been enthusiastic about it almost that long. So it’s kind of like the Roman Empire. It’s an idea we hear about early and often enough we’re never really introduced to it. It’s never a big new idea we’re presented, the way, like, you get specifically told there was (say) a War of 1812. We just soak up a couple bits we overhear about the idea and carry on as best our lives allow.

But to think of space. Start from somewhere. Imagine moving a little bit in one direction. How far have you moved? If you started out in this one direction, did you somehow end up in a different one? Now imagine moving in a different direction. Now how far are you from where you started? How far is your direction from where you might have imagined you’d be? Our intuition is built around a Euclidean space, or one close enough to Euclidean. These directions and distances and combined movements work as they would on a sheet of paper, or in our living room. But there is a difference. Walk a kilometer due east and then one due north and you will not be in exactly the same spot as if you had walked a kilometer due north and then one due east. Tensors are efficient ways to describe those little differences. And they tell us something of the shape of the Earth from knowing these differences. And they do it using much of the form that matrices and vectors do, so they’re not so hard to learn as they might be.
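If you want numbers for that walk, here’s a sketch. It assumes a perfectly spherical Earth of radius 6,371 kilometers and a starting point I picked arbitrarily:

```python
# East-then-north versus north-then-east on a spherical Earth.
from math import cos, radians, degrees

R = 6371.0                 # Earth's radius in kilometers (sphere assumed)
lat0, lon0 = 40.0, -75.0   # an arbitrary starting point, in degrees

def north(lat, lon, km):
    return lat + degrees(km / R), lon

def east(lat, lon, km):
    # A kilometer east-west spans more longitude the nearer a pole you are.
    return lat, lon + degrees(km / (R * cos(radians(lat))))

print(north(*east(lat0, lon0, 1.0), 1.0))  # east first, then north
print(east(*north(lat0, lon0, 1.0), 1.0))  # north first: slightly more longitude
```

The two final longitudes disagree by a whisker, roughly a tenth of a meter’s worth at this latitude. Tiny, but not zero, and that sort of bookkeeping is what the tensor machinery is for.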

That’s all prelude. Here’s the next piece. We go looking at transformations. We take a perfectly good coordinate system and a point in it. Now let the light of the full Moon shine upon it, so that it shifts to being a coordinate werewolf. Look around you. There’s a tensor that describes how your coordinates look here. What is it?

You might wonder why we care about transformations. What was wrong with the coordinates we started with? But that’s because mathematicians have lumped a lot of stuff into the same name of “transformation”. A transformation might be something as dull as “sliding things over a little bit”. Or “turning things a bit”. It might be “letting a second of time pass”. Or “following the flow of whatever’s moving”. Stuff we’d like to know for physics work.

“General covariance” is a term that comes up when thinking about transformations. Suppose we have a description of some physics problem. By this mostly we mean “something moving in space” or “a bit of light moving in space”. That’s because they’re good building blocks. A lot of what we might want to know can be understood as some mix of those two problems.

Put your description through the same transformation your coordinate system had. This will (most of the time) change the details of how your problem’s represented. But does it change the overall description? Is our old description no longer even meaningful?

I trust at this point you’ve nodded and thought something like “well, that makes sense”. Give it another thought. How could we not have a “generally covariant” description of something? Coordinate systems are our impositions on a problem. We create them to make our lives easier. They’re real things in exactly the same way that lines of longitude and latitude are real. If we increased the number describing the longitude of every point in the world by 14, we wouldn’t change anything real about where stuff was or how to navigate to it. We couldn’t.

Here I admit I’m stumped. I can’t think of a good example of a system that would look good but not be generally covariant. I’m forced to resort to metaphors and analogies that make this essay particularly unsuitable to use for your thesis defense.

So here’s the thing. Longitude is a completely arbitrary thing. Measuring where you are east or west of some prime meridian might be universal, or easy for anyone to tumble onto. But the prime meridian is a cultural choice. It’s changed before. It may change again. Indeed, Geographic Information Systems people still work with many different prime meridians. Most of them are for specialized purposes. Stuff like mapping New Jersey in feet north and east of some reference, for which Greenwich would make the numbers too ugly. But if our planet is mapped in an alien’s records, that map has at its center some line almost surely not Greenwich.

But latitude? Latitude is, at least, less arbitrary. That we measure it from zero to ninety degrees, north or south, is a cultural choice. (Or from -90 to 90 degrees. Same thing.) But that there’s a north pole and a south pole? That’s true as long as the planet is rotating. And that’s forced on us. If we tried to describe the Earth as rotating on an axis between Paris and Mexico City, we would … be fighting an uphill struggle, at least. It’s hard to see any problem that might make easier, apart from getting between Paris and Mexico City.

In models of the laws of physics we don’t really care about the north or south pole. A planet might have them or might not. But it has got some privileged stuff that just has to be so. We can’t have stuff that makes the speed of light in a vacuum change. And we have to make sense of a block of space that hasn’t got anything in it, no matter, no light, no energy, no gravity. I think those are the important pieces actually. But I’ll defer, growling angrily, to an expert in general relativity or non-Euclidean coordinates if I’ve misunderstood.

It’s often put that “general covariance” is one of the requirements for a scheme to describe General Relativity. I shall risk sounding like I’m making a joke and say that depends on your perspective. One can use different philosophical bases for describing General Relativity. In some of them you can see general covariance as a result rather than use it as a basic assumption. Here’s a 1993 paper by Dr John D Norton that describes some of the different ways to understand the point of general covariance.

By the way the term “general covariance” comes from two pieces. The “covariance” is because it describes how changes in one coordinate system are reflected in another. It’s “general” because we talk about coordinate transformations without knowing much about them. That is, we’re talking about transformations in general, instead of some specific case that’s easy to work with. This is why the mathematics of this can be frightfully tricky; we don’t know much about the transformations we’re working with. For a parallel, it’s easy to tell someone how to divide 14 into 112. It’s harder to tell them how to divide absolutely any number into absolutely any other number.

Quite a bit of mathematical physics plays into geometry. Physicists mostly see gravity as a problem of geometry. People who like reading up on science take that as given too. But many problems can be understood as a point or a blob of points in some kind of space, and how that point moves or that blob evolves in time. We don’t see “general covariance” in these other fields exactly. But we do see things that resemble it. It’s an idea with considerable reach.


I’m not sure how I feel about this. For most of my essays I’ve kept away from equations, even for the Why Stuff Can Orbit sequence. But this is one of those subjects it’s hard to be exact about without equations. I might revisit this in a special all-symbols, calculus-included, edition. Depends what my schedule looks like.

Reading the Comics, August 12, 2016: Skipping Saturday Edition


I have no idea how many or how few comic strips on Saturday included some mathematical content. I was away most of the day. We made a quick trip to the Michigan’s Adventure amusement park and then to play pinball in a kind-of competitive league. The park turned out to have every person in the world there. If I didn’t wave to you from the queue on Shivering Timbers I apologize but it hasn’t got the greatest lines of sight. The pinball stuff took longer than I expected too and, long story short, we got back home about 4:15 am. So I’m behind on my comics and here’s what I did get to.

Tak Bui’s PC and Pixel for the 8th depicts the classic horror of the cleaning people wiping away an enormous amount of hard work. It’s a primal fear among mathematicians at least. Boards with a space blocked off with the “DO NOT ERASE” warning are common. At this point, though, at least, the work is probably savable. You can almost always reconstruct work, and a few smeared lines like this are not bad at all.

The work appears to be quantum mechanics work. The tell is in the upper right corner. There’s a line defining E (energy) as equal to something including \imath \hbar \frac{\partial}{\partial t}\phi(r, t). This appears in the time-dependent Schrödinger Equation. It describes how probability waveforms look when the potential energies involved may change in time. These equations are interesting and, outside a handful of special cases, impossible to solve exactly. We have to resort to approximations, including numerical approximations, all the time. So that’s why the computer lab would be working on this.
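For flavor, here’s a minimal sketch of the kind of numerical approximation such a lab might run: the split-step Fourier method for a one-dimensional wave packet. The grid, the units, and the harmonic potential are illustrative choices of mine, not anything from the strip:

```python
# Split-step Fourier evolution of a 1-D wave packet (illustrative only).
import numpy as np

hbar = m = 1.0                                 # natural units
N, L, dt = 1024, 100.0, 0.01
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)     # angular wavenumbers

V = 0.5 * x**2                                 # a harmonic-oscillator potential
psi = np.exp(-(x - 2.0)**2).astype(complex)    # a displaced Gaussian packet
psi /= np.sqrt(np.sum(np.abs(psi)**2) * (L / N))

for _ in range(1000):
    psi *= np.exp(-0.5j * V * dt / hbar)       # half a step of potential
    psi = np.fft.ifft(np.exp(-0.5j * hbar * k**2 * dt / m) * np.fft.fft(psi))
    psi *= np.exp(-0.5j * V * dt / hbar)       # the other half

print(np.sum(np.abs(psi)**2) * (L / N))        # total probability stays 1
```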

Mark Anderson’s Andertoons! Where would I be without them? Besides short on content. The strip for the 10th depicts a pollster saying to “put the margin of error at 50%”, guaranteeing the results are right. If you follow election polls you do see the results come with a margin of error, usually of about three percent. But every sampling technique carries with it a margin of error. The point of a sample is to learn something about the whole without testing everything in it, after all. And probability describes how likely it is the quantity measured by a sample will be far from the quantity the whole would have. The logic behind this is independent of the thing being sampled. It depends on the size of the sample. It depends on how the sampling is done. It doesn’t matter whether you’re sampling voter preferences or whether there are the right number of peanuts in a bag of squirrel food.

So a sample’s measurement will almost never be exactly the same as the whole population’s. That’s just requesting too much of luck. The margin of error represents how far it is likely we’re off. If we’ve sampled the voting population fairly — the hardest part — then it’s quite reasonable the actual vote tally would be, say, one percent different from our poll. It’s implausible that the actual votes would be ninety percent different. The margin of error is roughly the biggest plausible difference we would expect to see.

Except. Sometimes we do, even with the best sampling methods possible, get a freak case. Rarely noticed beside the margin of error is the confidence level. This is the probability that the actual population value is within the margin of error of the sample’s value. We don’t pay much attention to this because we don’t do statistical sampling on a daily basis. The most that normal people do is read election polling results. And most election polls settle for a confidence level of about 95 percent. That is, 95 percent of the time the actual voting preference will be within the three or so percentage points of the survey. The 95 percent confidence level is popular maybe because it feels like a nice round number. It’ll be off only about one time out of twenty. It also makes a nice balance between a margin of error that doesn’t seem too large and a sample that doesn’t need too many people surveyed. As often with statistics the common standard is an imperfectly-logical blend of good work and ease of use.
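The arithmetic behind those familiar numbers is short enough to sketch. The 1.96 is the standard-normal score for 95 percent confidence; the sample size is one I picked as poll-typical:

```python
# Margin of error for a simple random sample at 95 percent confidence.
from math import sqrt

z = 1.96     # standard-normal score for 95 percent confidence
p = 0.5      # worst-case proportion: p*(1 - p) is largest at one-half
n = 1067     # an illustrative, poll-typical sample size

print(z * sqrt(p * (1 - p) / n))   # about 0.030: plus-or-minus three points
```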

For the 11th Mark Anderson gives me less to talk about, but a cute bit of wordplay. I’ll take it.

Anthony Blades’s Bewley for the 12th is a rerun. It’s at least the third time this strip has turned up since I started writing these Reading The Comics posts. For the record, it also ran the 27th of April, 2015, and the 24th of May, 2013. It also suggests mathematicians have a particular tell. Try this out next time you play word-problem poker and let me know how it works for you.

Julie Larson’s The Dinette Set for the 12th I would have sworn I’d seen here before. I don’t find it in my archives, though. We are meant to just giggle at Larson’s characters, who bring their penny-wise pound-foolishness to everything. But there is a decent practical mathematics problem here. (This is why I thought it had run here before.) How far is it worth going out of one’s way for cheaper gas? How much cheaper? It’s simple algebra, sketched below, and I’d bet there are many simple JavaScript calculator tools for it. The comic strip originally ran the 4th of October, 2005. Possibly it’s been rerun since.
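Here’s that problem set out, with numbers made up for the occasion. The test: do the savings on a fill-up beat the cost of the fuel burned driving there and back? (Your time is worth something too; this sketch ignores it.)

```python
# Is a detour for cheaper gas worth it? All numbers are made up.
def detour_worth_it(tank_gallons, price_near, price_far, extra_miles, mpg):
    savings = tank_gallons * (price_near - price_far)
    burned = (2 * extra_miles / mpg) * price_far   # fuel going and coming back
    return savings > burned

# A 12-gallon tank, gas 10 cents cheaper, 8 miles out of the way, 25 mpg:
print(detour_worth_it(12, 2.89, 2.79, 8, 25))      # False: stay close to home
```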

Bill Amend’s FoxTrot Classics for the 12th is a bunch of gags about a mathematics fighting game. I think Amend might be on to something here. I assume mathematics-education contest games have evolved from what I went to elementary school on. That was a Commodore PET with a game where every time you got a multiplication problem right your rocket got closer to the ASCII Moon. But the game would probably quickly turn into people figuring how to multiply the other person’s function by zero. I know a game exploit when I see it.

The most obscure reference is in the third panel. Jason speaks of “a z = 0 transform”. This would seem to be some kind of z-transform, a thing from digital signal processing. You can represent the amplification, or noise-removal, or averaging, or other processing of a string of numbers as a polynomial. Of course you can. Everything is polynomials. (OK, sometimes you must use something that looks like a polynomial but includes stuff like the variable z raised to a negative power. Don’t let that throw you. You treat it like a polynomial still.) So I get what Jason is going for here; he’s processing Peter’s function down to zero.
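For the record, the z-transform of a string of numbers x[0], x[1], x[2], and so on is

X(z) = \sum_{n = 0}^{\infty} x[n] \, z^{-n}

which is that polynomial-like thing, complete with the negative powers I warned you about.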

That said, let me warn you that I don’t do digital signal processing. I just taught a course in it. (It’s a great way to learn a subject.) But I don’t think a “z = 0 transform” is anything. Maybe Amend encountered it as an instructor’s or friend’s idiosyncratic usage. (Amend was a physics student in college, and shows his comfort with mathematics-major talk often. He by the way isn’t even the only syndicated cartoonist with a physics degree. Bud Grace of The Piranha Club was also a physics major.) I suppose he figured “z = 0 transform” would read clearly to the non-mathematician and be interpretable to the mathematician. He’s right about that.

A Leap Day 2016 Mathematics A To Z: Jacobian


I don’t believe I got any requests for a mathematics term starting ‘J’. I’m as surprised as you. Well, maybe less surprised. I’ve looked at the alphabetical index for Wolfram MathWorld and noticed its relative poverty for ‘J’. It’s not as bad as ‘X’ or ‘Y’, though. But it gives me room to pick a word of my own.

Jacobian.

The Jacobian is named for Carl Gustav Jacob Jacobi, who lived in the first half of the 19th century. He’s renowned for work in mechanics, the study of mathematically modeling physics. He’s also renowned for matrices, rectangular grids of numbers which represent problems. There’s more, of course, but those are the points that bring me to the Jacobian I mean to talk about. There are other things named for Jacobi, including other things named “Jacobian”. But I mean to limit the focus to two, related, things.

I discussed mappings some while describing homomorphisms and isomorphisms. A mapping’s a relationship matching things in one set, a domain, to things in another set, the range. The domain and the range can be anything at all. They can even be the same thing, if you like.

A very common domain is … space. Like, the thing you move around in. It’s a region full of points that are all some distance and some direction from one another. There’s almost always assumed to be multiple directions possible. We often call this “Euclidean space”. It’s the space that works like we expect for normal geometry. We might start with a two- or three-dimensional space. But it’s often convenient, especially for physics problems, to work with more dimensions. Four dimensions. Six dimensions. Incredibly huge numbers of dimensions. Honest, this often helps. It’s just harder to sketch out.

So we might for a problem need, say, 12-dimensional space. We can describe a point in that with an ordered set of twelve coordinates. Each describes how far you are from some standard reference point known as The Origin. If it doesn’t matter how many dimensions we’re working with, we call it an N-dimensional space. Or we use another letter if N is committed to something or other.

This is our stage. We are going to be interested in some N-dimensional Euclidean space. Let’s pretend N is 2; then our stage looks like the screen you’re reading now. We don’t need to pretend N is larger yet.

Our player is a mapping. It matches things in our N-dimensional space back to the same N-dimensional space. For example, maybe we have a mapping that takes the point with coordinates (3, 1) to the point (-3, -1). And it takes the point with coordinates (5.5, -2) to the point (-5.5, 2). And it takes the point with coordinates (-6, -π) to the point (6, π). You get the pattern. If we start from the point with coordinates (x, y) for some real numbers x and y, then the mapping gives us the point with coordinates (-x, -y).

One more step and then the play begins. Let’s not just think about a single point. Think about a whole region. If we look at the mapping of every point in that whole region, we get out … probably, some new region. We call this the “image” of the original region. With the mapping from the paragraph above, it’s easy to say what the image of a region is. It’ll look like the reflection in a corner mirror of the original region.

What if the mapping’s more complicated? What if we had a mapping that described how something was reflected in a cylindrical mirror? Or a mapping that describes how the points would move if they represent points of water flowing around a drain? — And that last explains why Jacobians appear in mathematical physics.

Many physics problems can be understood as describing how points that describe the system move in time. The dynamics of a system can be understood by how moving in time changes a region of starting conditions. A system might keep a region pretty much unchanged. Maybe it makes the region move, but it doesn’t change size or shape much. Or a system might change the region impressively. It might keep the area about the same, but stretch it out and fold it back, the way one might knead cookie dough.

The Jacobian, the one I’m interested in here, is a way of measuring these changes. The Jacobian matrix describes, for each point in the original domain, how a tiny change in one coordinate causes a change in the mapping’s coordinates. So if we have a mapping from an N-dimensional space to an N-dimensional space, there are going to be N times N values at work. Each one represents a different piece. How much does a tiny change in the first coordinate of the original point change the first coordinate of the mapping of the point? How much does a tiny change in the first coordinate of the original point change the second coordinate of the mapping of the point? How much does a tiny change in the first coordinate of the original point change the third coordinate of the mapping of the point? … how much does a tiny change in the second coordinate of the original point change the first coordinate of the mapping of the point? And on and on and now you know why mathematics majors are trained on Jacobians with two-by-two and three-by-three matrices. We do maybe a couple four-by-four matrices to remind us that we are born to suffer. We never actually work out bigger matrices. Life is just too short.
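If you want the bookkeeping without the suffering, the sympy library will build the matrix of partial derivatives for you. A sketch, using the corner-mirror mapping from above:

```python
# The Jacobian matrix of the mapping (x, y) -> (-x, -y), via sympy.
import sympy as sp

x, y = sp.symbols('x y')
mapping = sp.Matrix([-x, -y])
coords = sp.Matrix([x, y])

print(mapping.jacobian(coords))   # Matrix([[-1, 0], [0, -1]])
```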

(I’ve been talking, by the way, about the mapping of an N-dimensional space to an N-dimensional space. This is because we’re about to get to something that requires it. But we can write a matrix like this for a mapping of an N-dimensional space to an M-dimensional space, a different-sized space. It has uses. Let’s not worry about that.)

If you have a square matrix, one that has as many rows as columns, then you can calculate something named the determinant. This involves a lot of work. It takes even more work the bigger the matrix is. This is why mathematics majors learn to calculate determinants on two-by-two and three-by-three matrices. We do a couple four-by-four matrices and maybe one five-by-five to again remind us about suffering.

Anyway, by calculating the determinant of a Jacobian matrix, we get the Jacobian determinant. Finally we have something simple. The Jacobian determinant says how the area of a region changes in the mapping. Suppose the Jacobian determinant at a point is 2. Then a small region containing that point has an image with twice the original area. Suppose the Jacobian determinant is 0.8. Then a small region containing that point has an image with area 0.8 times the original area. Suppose the Jacobian determinant is -1. Then —

Well, what would you imagine?

If the Jacobian determinant is -1, then a small region around that point gets mapped to something with the same area. What changes is called the handedness. The mapping doesn’t just stretch or squash the region, but it also flips it along at least one dimension. The Jacobian determinant can tell us that.
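Continuing the sympy sketch: the determinant tells the corner mirror, which keeps handedness, apart from an ordinary flat mirror, which flips it:

```python
# Jacobian determinants: area change and handedness, via sympy.
import sympy as sp

x, y = sp.symbols('x y')
coords = sp.Matrix([x, y])

corner_mirror = sp.Matrix([-x, -y])   # reflect through the origin
flat_mirror = sp.Matrix([x, -y])      # reflect across the x-axis

print(corner_mirror.jacobian(coords).det())   # 1: area kept, handedness kept
print(flat_mirror.jacobian(coords).det())     # -1: area kept, handedness flipped
```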

So the Jacobian matrix, and the Jacobian determinant, are ways to describe how mappings change areas. Mathematicians will often call either of them just “the Jacobian”. We trust context to make clear what we mean. Either one is a way of describing how mappings change space: how they expand or contract, how they rotate, how they reflect spaces. Some fields of mathematics, including a surprising amount of the study of physics, are about studying how space changes.