Tagged: history of mathematics

  • Joseph Nebus 6:00 pm on Friday, 14 April, 2017
    Tags: history of mathematics

    What Is The Logarithm of a Negative Number? 


    Learning of imaginary numbers, things created to be the square roots of negative numbers, inspired me. It probably inspires anyone who’s the sort of person who’d become a mathematician. The trick was great. I wondered: could I do it? Could I find some other useful expansion of the number system?

    The square root of a complex-valued number sounded like the obvious way to go, until a little later that week, when I learned those are just other complex-valued numbers. The next thing I hit on: how about the logarithm of a negative number? Couldn’t that be a useful expansion of numbers?

    No. It turns out you can make a sensible logarithm of negative, and complex-valued, numbers using complex-valued numbers. Same with trigonometric and inverse trig functions, tangents and arccosines and all that. There isn’t anything we can do with the normal mathematical operations that needs something bigger than the complex-valued numbers to play with. It’s possible to expand on the complex-valued numbers. We can make quaternions and some more elaborate constructs there. They don’t solve any particular shortcoming in complex-valued numbers, but they’ve got their uses. I never got anywhere near reinventing them. I don’t regret the time spent on that. There’s something useful in trying to invent something even if it fails.

    One problem with mathematics — with all intellectual fields, really — is that it’s easy, when teaching, to give the impression that this stuff is the Word of God, built into the nature of the universe and inarguable. It’s so not. The stuff we find interesting and how we describe those things are the results of human thought, attempts to say what is interesting about a thing and what is useful. And what best approximates our ideas of what we would like to know. So I was happy to see a tweet come across my Twitter feed.

    The tweet links to a 12-page paper by Deepak Bal, “Leibniz, Bernoulli, and the Logarithms of Negative Numbers”. It’s a review of how the idea of a logarithm of a negative number got developed over the course of the 18th century. And of what great minds, like Gottfried Leibniz and John (I) Bernoulli, argued about as they found problems with the implications of what they were doing. (There were a lot of Bernoullis doing great mathematics, and even multiple John Bernoullis. The (I) is among the ways we keep them sorted out.) It’s worth a read, I think, even if you’re not all that versed in how to calculate logarithms. (But if you’d like to be better-versed, here’s the tail end of some thoughts about that.) The process by which a good idea like this comes to be is worth knowing.

    Also: it turns out there’s not “the” logarithm of a complex-valued number. There are infinitely many logarithms. But they’re a family, all strikingly similar, so we can pick one that’s convenient and just use that. Ask if you’re really interested.
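    To make that concrete, here is a minimal sketch using Python’s standard cmath module (my own illustrative example, not anything from Bal’s paper). The principal logarithm, the one cmath.log returns, has imaginary part in (-π, π]; every other logarithm of the same number differs from it by an integer multiple of 2πi, and each member of the family exponentiates back to the original number.

        import cmath
        import math

        z = -1
        principal = cmath.log(z)   # the principal value: 0 + pi*j, since -1 = e^(i*pi)

        # The rest of the family: shift the imaginary part by 2*pi*k.
        family = [principal + 2 * math.pi * 1j * k for k in range(-2, 3)]

        # Every member exponentiates back to -1 (up to rounding error).
        for w in family:
            print(w, cmath.exp(w))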

     
  • Joseph Nebus 6:49 pm on Friday, 20 June, 2014
    Tags: history of mathematics

    16,000 and a Square 


    I reached my 16,000th page view, sometime on Thursday. That’s a tiny bit slower than I projected based on May’s readership statistics, but May was a busy month and I’ve had a little less time to write stuff this month, so I’m not feeling bad about that.

    Meanwhile, while looking for something else, I ran across a bit about mathematical notation in Florian Cajori’s A History of Mathematical Notations, which has left me with a grin since. The book is very good about telling the stories of just what the title suggests. It’s a book well worth dipping into, because everything you see written down is the result of a long process of experimentation and fiddling about to find the right balance of “expressing an idea clearly” and “expressing an idea concisely” and “expressing an idea so it’s not too hard to work with”.

    The idea here is the square of a variable, which these days we’d normally write as a^2. According to Cajori (section 304), René Descartes “preferred the notation aa to a^2.” Cajori notes that Carl Gauss had this same preference and defended it on the grounds that doubling the symbol didn’t take any more (or less) space than the superscript 2 did. Cajori lists other great mathematicians who preferred doubling the letter for squaring, including Christiaan Huygens, Edmond Halley, Leonhard Euler, and Isaac Newton. Among mathematicians who preferred a^2 were Blaise Pascal, David Gregory (who was big in infinite series), and Gottfried Wilhelm Leibniz.

    Well of course Newton and Leibniz would be on opposite sides of the aa versus a^2 debate. How could the universe be sensible otherwise?

     
  • Joseph Nebus 9:44 pm on Friday, 21 June, 2013
    Tags: history of mathematics

    Where Do Negative Numbers Come From? 


    Some time ago — and I forget when, I’m embarrassed to say, and can’t seem to find it because the search tool doesn’t work on comments — I was asked how negative numbers came to be accepted. That’s a great question, particularly since, while the idea of positive numbers is probably lost in prehistory, negative numbers definitely progressed in the past thousand years or so from something people might wildly speculate about to a reasonably comfortable part of daily mathematics.

    While searching for background information I ran across a doctoral thesis, Making Sense Of Negative Numbers, which is uncredited in the PDF I just linked to but appears to be by Dr Cecilia Kilhamn, of the University of Gothenburg, Sweden. Dr Kilhamn’s particular interest (here) is in how people learn to use negative numbers, so most of the thesis is about the conceptual difficulties people have when facing the minus sign (not least because it serves two roles, of marking a number as negative and of marking the subtraction operation), but the first chapters describe the historical process of developing the concept of negative numbers.

    (More …)

     
    • Peter Mander 6:21 pm on Saturday, 22 June, 2013

      It was me. And thank you for a fascinating post.

      (Reading the Comics, Feb 26)


      • Joseph Nebus 5:15 am on Friday, 28 June, 2013

        Oh, thank you so. I had a feeling it was one of the Reading the Comics threads but couldn’t pin down which.

        I’m really quite interested in trying to understand different models people had for negative numbers. References I’ve seen to people hypothesizing that negative numbers were larger than positive ones are intriguing, particularly when I think about the statistical mechanics definition of temperature.
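        (For the curious: the standard statistical-mechanics definition of temperature is

            \frac{1}{T} = \frac{\partial S}{\partial E}

        where S is entropy and E is energy. A system whose entropy decreases as energy is added has a negative temperature, and such a system turns out to be “hotter” than any positive-temperature one, an odd echo of the old idea that negative numbers might exceed positive ones.)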


    • Steve Morris 3:21 pm on Wednesday, 5 March, 2014

      I am surprised that there was opposition so recently in history. I wonder when imaginary numbers came to be accepted?


      • Joseph Nebus 5:04 am on Thursday, 6 March, 2014

        Folklore in the mathematics department is that imaginary numbers still aren’t really accepted at least over in the electrical engineering department.

        (Electrical engineers, the mathematics folks say, use ‘j’ rather than ‘i’ to denote imaginary numbers, which are a convenient way to represent properties of alternating currents. The lore is that electrical engineers won’t put up with ‘imaginary’ numbers because they’re not real, although I find it more compelling that the symbol ‘i’ is already doing heavy enough work representing quantities like current.)
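        As a quick sketch of that phasor bookkeeping (the component values here are made up for illustration), and it amuses me that Python itself adopted the engineers’ j for its complex literals:

            import cmath

            # Series RL circuit driven by an AC source (illustrative values).
            R = 100.0                # resistance, ohms
            L = 0.5                  # inductance, henries
            w = 2 * cmath.pi * 60    # 60 Hz supply, in radians per second

            Z = R + 1j * w * L       # complex impedance; j marks the reactive part
            V = 120.0                # drive voltage
            I = V / Z                # complex current: amplitude and phase in one number

            print(abs(I), cmath.phase(I))   # current magnitude and phase lag

        The one complex division hands back both the current’s amplitude and its phase lag, which is the convenience the engineers are after.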


    • howardat58 12:55 pm on Tuesday, 9 September, 2014

      In the real world:
      We count, which generates the need for numbers, the natural numbers (including zero).
      We need to measure amounts of stuff, weight, volume, area etcetera, which requires a unit of measurement and fractional numbers.
      We need to describe positions, levels and changes, temperature, voltage, height, which leads to signed numbers (positive and negative).

      These are three very different types of activity, and the simple-minded idea that each of these number systems is simply an extension of the previous one is not helpful to understanding what is going on.

      There is a big difference between 3 apples and 3 feet.
      There is an even bigger difference between 3 feet and 3 volts.

      Algebra assumes that we are working in the signed number system, although some of the quantities involved, when algebra is applied to the real world, may be amounts, or counts. (Diophantine equations excepted).

      With operations, difficulties can arise unless we are very careful.
      The worst case is “subtraction”. In the counting numbers it means “take away”.
      In the measuring of amounts it means “cut off” or “pour away”.
      In the measurement of position or level it means “lower by”.
      The sign of a signed number says “above” or “below” zero, and also it specifies the direction of a change.

      Here is my extract from A N Whitehead’s “An Introduction to Mathematics” (1911). It’s a good read.

      http://howardat58.files.wordpress.com/2014/08/whitehead-intro-to-math-negative-nos.doc


      • Joseph Nebus 6:52 pm on Friday, 12 September, 2014

        That Whitehead extract is a very good read, yes. I like the precise outlining of the different ways we might mean signed numbers to be; it is probably a slipping of intuitive feeling between one model of negative numbers and the others that causes a lot of trouble working with them.

