The Fundamentals of Mathematics


A toot on Mathstodon made me aware of this. It’s a listing, and brief description, of 243 theorems, as compiled by Oliver Knill. As the title implies, they’re all intended to be fundamental theorems of some area of mathematics.

Many areas of mathematics have something called their Fundamental Theorem. The one that comes first to my mind is always the Fundamental Theorem of Calculus. That one connects derivatives and indefinite integrals in a way that saves a lot of work. But also commonly in my mind are the Fundamental Theorem of Algebra, which assures one of how many roots a polynomial should have, and the Fundamental Theorem of Arithmetic, about factoring counting numbers into primes.

The list does not stop there. And it gets into areas where “Fundamental Theorem Of ___ ” is not the common phrasing. Where I know something about the area, though, they are certainly core, fundamental theorems as promised. Or they’re important mathematical principles, such as the pigeon-hole principle. It’s worth skimming around; even if you don’t know anything about the area, Knill provides some context, so you can understand why this might be of interest.

And then, after the many theorems, Knill provides some thoughts about why these theorems in particular. What makes a theorem “fundamental”? This is something which shows off how culturally dependent and human the construction of mathematics is. And then, from page 147, there’s a set of short lecture notes about the history of mathematics. Even if your eyes glaze over at torsion groups, it’s worth going into those notes at the end.

What Is The Logarithm of a Negative Number?


Learning of imaginary numbers, things created to be the square roots of negative numbers, inspired me. It probably inspires anyone who’s the sort of person who’d become a mathematician. The trick was great. I wondered: could I do it too? Could I find some other useful expansion of the number system?

The square root of a complex-valued number sounded like the obvious way to go, until a little later that week, when I learned those are just other complex-valued numbers. The next thing I hit on: how about the logarithm of a negative number? Couldn’t that be a useful expansion of the number system?

No. It turns out you can make a sensible logarithm of negative, and complex-valued, numbers using complex-valued numbers. Same with trigonometric and inverse trig functions, tangents and arccosines and all that. There isn’t anything we can do with the normal mathematical operations that needs something bigger than the complex-valued numbers to play with. It’s possible to expand on the complex-valued numbers. We can make quaternions and some more elaborate constructs there. They don’t solve any particular shortcoming in complex-valued numbers, but they’ve got their uses. I never got anywhere near reinventing them. I don’t regret the time spent on that. There’s something useful in trying to invent something even if it fails.
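
Nothing in this little sketch is from the original post; it’s just a quick illustration, in Python, of that point. The standard library’s cmath module handles the complex-valued versions of log and the inverse trig functions, and its answers are ordinary complex-valued numbers.

```python
# A quick illustration (my own sketch, not from the post): the logarithm of a
# negative number, and the arccosine of a number bigger than 1, come out as
# ordinary complex-valued numbers.
import cmath

print(cmath.log(-1))             # pi*i, since e**(pi*i) = -1
print(cmath.log(-4))             # ln(4) + pi*i
print(cmath.acos(2))             # a purely imaginary number
print(cmath.exp(cmath.log(-4)))  # gets back to -4, up to rounding error
```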

One problem with mathematics — with all intellectual fields, really — is that it’s easy, when teaching, to give the impression that this stuff is the Word of God, built into the nature of the universe and inarguable. It’s so not. The stuff we find interesting and how we describe those things are the results of human thought, attempts to say what is interesting about a thing and what is useful. And what best approximates our ideas of what we would like to know. So I was happy to see this come across my Twitter feed:

The tweet links to a 12-page paper by Deepak Bal, Leibniz, Bernoulli, and the Logarithms of Negative Numbers. It’s a review of how the idea of a logarithm of a negative number got developed over the course of the 18th century. And of what great minds, like Gottfried Leibniz and John (I) Bernoulli, argued about as they found problems with the implications of what they were doing. (There were a lot of Bernoullis doing great mathematics, and even multiple John Bernoullis. The (I) is among the ways we keep them sorted out.) It’s worth a read, I think, even if you’re not all that versed in how to calculate logarithms. (But if you’d like to be better-versed, here’s the tail end of some thoughts about that.) The process of how a good idea like this comes to be is worth knowing.

Also: it turns out there’s not “the” logarithm of a complex-valued number. There are infinitely many logarithms. But they’re a family, all strikingly similar, so we can pick one that’s convenient and just use that. Ask if you’re really interested.
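
If you are interested, here’s a concrete picture of that family, again as my own sketch rather than anything from the post or the linked paper: every logarithm of -1 differs from the principal one by a whole multiple of 2πi, and each of them exponentiates right back to -1.

```python
# A sketch of the "family" of logarithms of -1: each member differs from the
# principal value by a multiple of 2*pi*i, and exp of any of them returns -1
# (up to floating-point rounding).
import cmath

principal = cmath.log(-1)   # the principal value, pi*i
for k in range(-2, 3):
    candidate = principal + 2 * cmath.pi * 1j * k
    print(k, candidate, cmath.exp(candidate))
```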

16,000 and a Square


I reached my 16,000th page view, sometime on Thursday. That’s a tiny bit slower than I projected based on May’s readership statistics, but May was a busy month and I’ve had a little less time to write stuff this month, so I’m not feeling bad about that.

Meanwhile, while looking for something else, I ran across a bit about mathematical notation in Florian Cajori’s A History of Mathematical Notations, which has left me with a grin since. The book is very good about telling the stories of just what the title suggests. It’s a book well worth dipping into because everything you see written down is the result of a long process of experimentation and fiddling about to find the right balance of “expressing an idea clearly” and “expressing an idea concisely” and “expressing an idea so it’s not too hard to work with”.

The idea here is the square of a variable, which these days we’d normally write as a^2. According to Cajori (section 304), René Descartes “preferred the notation aa to a^2.” Cajori notes that Carl Gauss had this same preference and defended it on the grounds that doubling the symbol didn’t take any more (or less) space than the superscript 2 did. Cajori lists other great mathematicians who preferred doubling the letter for squaring, including Christiaan Huygens, Edmond Halley, Leonhard Euler, and Isaac Newton. Among mathematicians who preferred a^2 were Blaise Pascal, David Gregory (who was big in infinite series), and Gottfried Wilhelm Leibniz.

Well of course Newton and Leibniz would be on opposite sides of the aa versus a^2 debate. How could the universe be sensible otherwise?

Where Do Negative Numbers Come From?


Some time ago — and I forget when, I’m embarrassed to say, and can’t seem to find it because the search tool doesn’t work on comments — I was asked about how negative numbers got to be accepted. That’s a great question, particularly since the idea of positive numbers seems to be lost in prehistory, while negative numbers definitely progressed in the past thousand years or so from something people might wildly speculate about to being a reasonably comfortable part of daily mathematics.

While searching for background information I ran across a doctoral thesis, Making Sense Of Negative Numbers, which is uncredited in the PDF I just linked to but appears to be by Dr Cecilia Kilhamn, of the University of Gothenburg, Sweden. Dr Kilhamn’s particular interest (here) is in how people learn to use negative numbers, so most of the thesis is about the conceptual difficulties people have when facing the minus sign (not least because it serves two roles, of marking a number as negative and of marking the subtraction operation), but the first chapters describe the historical process of developing the concept of negative numbers.

