I’m slow about sharing them is all. It’s a simple dynamic: I want to write enough about each tweet that it’s interesting to share, and then once a little time has passed, I need to do something more impressive to be worth the wait. Eventually, nothing is ever shared. Let me try to fix that.
Just as it says: a link to Leonhard Euler’s Elements of Algebra, as rendered by Google Books. Euler you’ll remember from every field of mathematics ever. This 1770 textbook is one of the earliest that presents algebra that looks like, you know, algebra, the way we study it today. Much of that is because this book presented algebra so well that everyone wanted to imitate it.
This Theorem of the Day, from back in November already, is about elliptic functions. Those came up several times in the Summer 2017 Mathematics A To Z. This one, about the Goins-Maddox-Rusin Theorem on Heron Triangles, is dense reading even by the standards of the Theorem of the Day tweet (which fits each day’s theorem into a single slide). Still, it’s worth lounging about in the mathematics.
Elke Stangl, writing about one of those endlessly-to-me interesting subjects: phase space. This is a particular way of representing complicated physical systems. Set it up right and all sorts of physics problems become, if not easy, at least things there’s a standard set of tools for. Thermodynamics really encourages learning about such phase spaces, and about entropy, and here she writes about some of this.
Non-limit calculating e by hand. https://t.co/Kv80RotboJ Fun activity & easily reproducible. Anyone know the author?
So ‘e’ is an interesting number. At least, it’s a number that’s got a lot of interesting things built around it. Here, John Golden points out a neat, fun, and inefficient way to find the value of ‘e’. It’s kin to that scheme for calculating π inefficiently that I was being all curmudgeonly about a couple of Pi Days ago.
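I can’t vouch for the exact method behind the link, but for a flavor of just how inefficiently you can get at ‘e’, here’s one classic scheme (my choice of example, not necessarily the tweet’s): draw uniform random numbers between 0 and 1 until their running sum passes 1, and the average number of draws you need turns out to be ‘e’. A minimal Python sketch:

```python
import random

def estimate_e(trials=200_000, seed=42):
    """Estimate e the slow way: draw uniform(0, 1) numbers until
    their running sum passes 1; the average count of draws
    needed converges to e."""
    rng = random.Random(seed)
    total_draws = 0
    for _ in range(trials):
        s, n = 0.0, 0
        while s <= 1.0:
            s += rng.random()
            n += 1
        total_draws += n
    return total_draws / trials
```

Two hundred thousand trials buys you maybe three correct digits, which is dreadful compared to just summing 1/n!, and that is rather the point.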
Jo Morgan comes to the rescue of everyone who tries to read old-time mathematics. There were a lot of great and surprisingly readable minds publishing in the 19th century, but then you get partway through a paragraph and it might as well be Old High Martian, with talk about diminishings and consequents and so on. So here’s some help.
For college students that will be taking partial differential equations next semester, here is a very good online book. https://t.co/txtfbMaRKc
As it says on the tin: a textbook on partial differential equations. If you find yourself adrift in the subject, maybe seeing how another author addresses the same subject will help, if nothing else for finding something familiar written in a different fashion.
Here's a cool way to paper-fold an ellipse:
1) Cut a circle and fold it so that the circumference falls on a fixed point inside 2) Repeat this procedure using random folds pic.twitter.com/TAU50pvgll
And this is just fun: creating an ellipse as the locus of points that are never on the fold line when a circle’s folded by a particular rule.
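If you’d rather check the claim numerically than with paper, here’s a sketch (my own setup, not from the tweet): a crease that folds circle point P onto the fixed point F is the perpendicular bisector of P and F, so a point Q lies on some crease exactly when |QF| = |QP| for some P on the circle. The points never touched by any crease turn out to be the interior of the ellipse with foci at the circle’s center and at F:

```python
import math
import random

def never_on_fold(qx, qy, fx, fy, r=1.0):
    """Q lies on some crease exactly when |QF| = |QP| for some P
    on the circle, and |QP| ranges over [|r - |QO||, r + |QO|]."""
    qo = math.hypot(qx, qy)             # distance from Q to center O
    qf = math.hypot(qx - fx, qy - fy)   # distance from Q to fixed point F
    return not (abs(r - qo) <= qf <= r + qo)

def inside_ellipse(qx, qy, fx, fy, r=1.0):
    # interior of the ellipse with foci O and F, string length r
    return math.hypot(qx, qy) + math.hypot(qx - fx, qy - fy) < r

rng = random.Random(1)
fx, fy = 0.4, 0.0   # the fixed point, somewhere inside the unit circle
for _ in range(10_000):
    qx, qy = rng.uniform(-1, 1), rng.uniform(-1, 1)
    if qx * qx + qy * qy < 1.0:   # only test points inside the circle
        assert never_on_fold(qx, qy, fx, fy) == inside_ellipse(qx, qy, fx, fy)
```

The two conditions agree at every sampled point, which is the folding construction’s whole charm: the creases sweep out everything except an ellipse.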
Finally, something whose tweet origin I lost. It was from one of the surprisingly many economists I follow considering I don’t do financial mathematics. But it links to a bit of economic history: Origins of the Sicilian Mafia: The Market for Lemons. It’s 31 pages plus references. And more charts about wheat production in 19th century Sicily than I would have previously expected to see.
By the way, if you’re interested in me on Twitter, that would be @Nebusj. Thanks for stopping in, should you choose to.
I’m not ready to finish the series off yet. But I am getting closer to wrapping up perturbed orbits. So I want to say something about what I’m looking for.
In some ways I’m done already. I showed how to set up a central force problem, where some mass gets pulled towards the center of the universe. It can be pulled by a force that follows almost any rule you like. The rule does have to follow some rules of its own: the strength of the pull may change with how far the mass is from the center, but it can’t depend on what angle the mass makes with respect to some reference meridian. Once we know how much angular momentum the mass has, we can find whether it can have a circular orbit. And we can work out whether that orbit is stable. If the orbit is stable, then given a small nudge, the mass wobbles around that equilibrium circle. It spends some time closer to the center of the universe and some time farther away from it.
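That wobble is easy to watch numerically. Here’s a sketch using one particular force rule, the inverse-square pull (the parameter choices are mine, purely for illustration): start the mass a little off the equilibrium circle, with the angular momentum that makes radius 1 the equilibrium, and the radius oscillates on either side of 1.

```python
import math

def accel(x, y, k=1.0, m=1.0):
    # inverse-square central pull toward the origin: |F| = k / r^2
    r = math.hypot(x, y)
    a = -k / (m * r * r)
    return a * x / r, a * y / r

def radial_wobble(r0=1.05, k=1.0, m=1.0, dt=1e-3, steps=60_000):
    """Track how far the mass strays from the equilibrium circle.
    The tangential speed is picked so the angular momentum matches
    a circular orbit at radius 1; starting at r0 != 1 is the nudge."""
    x, y = r0, 0.0
    vx, vy = 0.0, math.sqrt(k / m) / r0   # L = m * vy * r0 = sqrt(k * m)
    ax, ay = accel(x, y, k, m)
    rmin = rmax = r0
    for _ in range(steps):
        # leapfrog (kick-drift-kick) integration step
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        x += dt * vx
        y += dt * vy
        ax, ay = accel(x, y, k, m)
        vx += 0.5 * dt * ax
        vy += 0.5 * dt * ay
        r = math.hypot(x, y)
        rmin, rmax = min(rmin, r), max(rmax, r)
    return rmin, rmax

rmin, rmax = radial_wobble()
# the radius spends time on both sides of the equilibrium value 1
assert rmin < 1.0 < rmax
```

The mass never falls in and never escapes; it just trades time between being a bit too close and a bit too far, exactly the behavior described above.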
I want something a little more, else I can’t carry on this series. I mean, we can make central force problems with more things in them. What we have now is a two-body problem. A three-body problem is more interesting. It’s pretty near impossible to give exact, generally true answers about. We can save things by only looking at very specific cases. Fortunately one is a sun, planet, and moon, where each object is much more massive than the next one. We see a lot of things like that. Four bodies is even more impossible. Things start to clear up if we look at, like, a million bodies, because our idea of what “clear” is changes. I don’t want to do that right now.
Instead I’m going to look for closed orbits. Closed orbits are what normal people would call “orbits”. We’re used to thinking of orbits as, like, satellites going around and around the Earth. We know those go in circles, or ellipses, over and over again. They don’t, but the difference between a closed orbit and what they do is small enough we don’t need to care.
Here, “orbit” means something very close to but not exactly what normal people mean by orbits. Maybe I should have said something about that before. But the difference hasn’t counted for much before.
Start off by thinking of what we need to completely describe what a particular mass is doing. You need to know the central force law that the mass obeys. You need to know, for some reference time, where it is. You also need to know, for that same reference time, what its momentum is. Once you have that, you can predict where it should go for all time to come. You can also work out where it must have been before that reference time. (This we call “retrodicting”. Or “predicting the past”. With this kind of physics problem time has an unnerving symmetry. The tools which forecast what the mass will do in the future are exactly the same as those which tell us what the mass has done in the past.)
Now imagine knowing all the sets of positions and momentums that the mass has had. Don’t look just at the reference time. Look at all the time before the reference time, and look at all the time after the reference time. Imagine highlighting all the sets of positions and momentums the mass ever took on or ever takes on. We highlight them against the universe of all the positions and momentums that the mass could have had if this were a different problem.
What we get is this ribbon-y thread that passes through the universe of every possible setup. This universe of every possible setup we call a “phase space”. It’s easy to explain the “space” part of that name. The phase space obeys the rules we’d expect from a vector space. It also acts in a lot of ways like the regular old space that we live in. The “phase” part I’m less sure how to justify. I suspect we get it because this way of looking at physics problems comes from statistical mechanics. And in that field we’re looking, often, at the different ways a system can behave. This mathematics looks a lot like that of different phases of matter. The changes between solids and liquids and gases are some of what we developed this kind of mathematics to understand, in fact. But this is speculation on my part. I’m not sure why “phase” has attached to this name. I can think of other, harder-to-popularize reasons why the name would make sense too. Maybe it’s the convergence of several reasons. I’d love to hear if someone has a good etymology, if one exists. Remember that we still haven’t got the story straight about why ‘m’ stands for the slope of a line.
Anyway, this ribbon of all the arrangements of position and momentum that the mass does ever at any point have we call a “trajectory”. We call it a trajectory because it looks like a trajectory. Sometimes mathematics terms aren’t so complicated. We also call it an “orbit” since very often the problems we like involve trajectories that loop around some interesting area. It looks like a planet orbiting a sun.
A “closed orbit” is an orbit that gets back to where it started. This means you can take some reference time, and wait. Eventually the mass comes back to the same position and the same momentum that you saw at that reference time. This might seem unavoidable. Wouldn’t it have to get back there? And it turns out, no, it doesn’t. A trajectory might wander all over phase space. This doesn’t take much imagination. But even if it doesn’t, if it stays within a bounded region, it could still wander forever without repeating itself. If you’re not sure about that, please consider an old sequence I wrote inspired by the Aardman Animation film Arthur Christmas. Also please consider seeing the Aardman Animation film Arthur Christmas. It is one of the best things this decade has offered us. The short version is, though, that there is a lot of room even in the smallest bit of space. A trajectory is, in a way, a one-dimensional thing that might get all coiled up. But phase space has got plenty of room for that.
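Here’s a tiny sketch of that bounded-but-never-repeating wandering (my own example, nothing to do with Arthur Christmas): step around a circle by an irrational fraction of a full turn. You stay on the circle forever, you come arbitrarily close to your starting point, but you never land on it again.

```python
import math

def closest_return(steps, alpha=math.sqrt(2) - 1):
    """Step around the unit circle by an irrational fraction alpha
    of a full turn, and report the closest the walk ever comes
    back to its starting point.  Bounded forever, never repeating."""
    best = 2.0          # largest possible chord on the unit circle
    theta = 0.0
    for _ in range(steps):
        theta = (theta + alpha) % 1.0
        # chord distance from the current point back to the start
        best = min(best, abs(2.0 * math.sin(math.pi * theta)))
    return best

# the closest approach keeps shrinking, but it never reaches zero
assert closest_return(10_000) < closest_return(100)
assert closest_return(10_000) > 0.0
```

A trajectory in phase space can do the same sort of thing with far more room to do it in.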
And sometimes we will get a closed orbit. The mass can wander around the center of the universe and come back to wherever we first noticed it, with the same momentum it first had. At that point it’s locked into doing that same thing again, forever. If it could ever break out of the closed orbit, it would have broken out the first time around, after all.
Closed orbits, I admit, don’t exist in the real world. Well, the real world is complicated. It has more than a single mass and a single force at work. Energy and momentum are conserved. But we effectively lose both to friction. We call the shortage “entropy”. Never mind. No person has ever seen a circle, and no person ever will. They are still useful things to study. So it is with closed orbits.
An equilibrium orbit, the circular orbit of a mass that’s at exactly the right radius for its angular momentum, is closed. A perturbed orbit, wobbling around the equilibrium, might be closed. It might not. I mean next time to discuss what has to be true to close an orbit.
This request echoes one of the first terms from my Summer 2015 Mathematics A To Z. Then I’d spent some time on a bijection, or a bijective map. A surjective map is a less complicated concept. But if you understood bijective maps, you picked up surjective maps along the way.
By “map”, in this context, mathematicians don’t mean those diagrams that tell you where things are and how you might get there. Of course we don’t. By a “map” we mean that we have some rule that matches things in one set to things in another. If this sounds to you like what I’ve claimed a function is then you have a good ear. A mapping and a function are pretty much different names for one another. If there’s a difference in connotation I suppose it’s that a “mapping” makes a weaker suggestion that we’re necessarily talking about numbers.
(In some areas of mathematics, a mapping means a function with some extra properties, often some kind of continuity. Don’t worry about that. Someone will tell you when you’re doing mathematics deep enough to need this care. Mind, that person will tell you by way of a snarky follow-up comment picking on some minor point. It’s nothing personal. They just want you to appreciate that they’re very smart.)
So a function, or a mapping, has three parts. One is a set called the domain. One is a set called the range. And then there’s a rule matching things in the domain to things in the range. With functions we’re so used to the domain and range being the real numbers that we often forget to mention those parts. We go on thinking “the function” is just “the rule”. But the function is all three of these pieces.
A function has to match everything in the domain to something in the range. That’s by definition. There are no unused scraps in the domain. If it looks like there are, that’s because we’re being sloppy in defining the domain. Or let’s be charitable: we assumed the reader understands the domain is only the set of things that make sense. And things make sense by being matched to something in the range.
Ah, but now, the range. The range could have unused bits in it. There’s nothing that inherently limits the range to “things matched by the rule to some thing in the domain”.
By now, then, you’ve probably spotted there have to be two kinds of functions. There’s one kind in which the whole range is used, and another in which it’s not. Good eye. This is exactly so.
If a function only uses part of the range, if it leaves out anything, even if it’s just a single value out of infinitely many, then the function is called an “into” mapping. If you like, it takes the domain and stuffs it into the range without filling the range.
Ah, but if a function uses every scrap of the range, with nothing left out, then we have an “onto” mapping. The whole of the domain gets sent onto the whole of the range. This is also known as a “surjective” mapping. We get the term “surjective” from Nicolas Bourbaki, the renowned 20th century mathematics art collective which did so much to give mathematics rigorous, intuition-free foundations.
The term pairs up with the “injective” mapping. In this, each element in the range matches up with at most one thing in the domain. So if you know the function’s rule, then whenever you see a thing in the range that the rule actually uses, you also know the one and only thing in the domain matched to it. If you don’t feel very French, you might call this sort of function one-to-one. That might be a better name for saying why this kind of function is interesting.
Not every function is injective. But then not every function is surjective either. But if a function is both injective and surjective — if it’s both one-to-one and onto — then we have a bijection. It’s a mapping that can represent the way a system changes and that we know how to undo. That’s pretty comforting stuff.
If we use a mapping to describe how a process changes a system, then knowing whether it’s a surjective map tells us something about the process. If it isn’t surjective, if it’s merely “into”, then the process makes the system settle into a proper subset of all the possible states. That doesn’t mean the thing is stable, that little jolts get worn down. And it doesn’t mean that the thing is settling to a fixed state. But it is a piece of information suggesting that’s possible. This may not seem like a strong conclusion. But considering how little we know about the function, it’s impressive to be able to say that much.
We said the function was “onto” if absolutely everything which was in the range got used. That is, if everything in the range has at least one thing in the domain that the rule matches to it. The function that has domain of -3 to 3, and range of -27 to 27, and the rule that matches a number x in the domain to the number x³ in the range is “onto”.
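For finite sets you can check these properties by brute force. A sketch, mirroring the cubing example with a handful of integers (the helper names here are my own):

```python
def is_injective(rule, domain):
    # injective (one-to-one): no two domain elements share an output
    outputs = [rule(x) for x in domain]
    return len(set(outputs)) == len(outputs)

def is_surjective(rule, domain, the_range):
    # surjective ("onto"): every range element gets hit at least once
    return {rule(x) for x in domain} == set(the_range)

domain = [-3, -2, -1, 0, 1, 2, 3]
cubes = [x ** 3 for x in domain]   # the range for the cubing rule

# cubing these integers is both one-to-one and onto its range of cubes
assert is_injective(lambda x: x ** 3, domain)
assert is_surjective(lambda x: x ** 3, domain, cubes)

# squaring, by contrast, is neither here: -2 and 2 collide,
# and nothing squares to -27
assert not is_injective(lambda x: x ** 2, domain)
assert not is_surjective(lambda x: x ** 2, domain, cubes)
```

The infinite case works the same way in principle; you just can’t check it by listing outputs.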
To explain this second term in my mathematical A to Z challenge I have to describe yet another term. That’s “function”. A non-mathematician’s idea of a function is something like “a line with a bunch of x’s in it, and maybe also a cosine or something”. That’s fair enough, although it’s a bit like defining chemistry as “mixing together colored, bubbling liquids until something explodes”.
By a function a mathematician means a rule describing how to pair up things found in one set, called the domain, with the things found in another set, called the range. The domain and the range can be collections of anything. They can be counting numbers, real numbers, letters, shoes, even collections of numbers or sets of shoes. They can be the same kinds of thing. They can be different kinds of thing.