Tagged: temperature

  • Joseph Nebus 3:00 pm on Monday, 16 May, 2016 Permalink | Reply
    Tags: temperature

    Reading the Comics, May 12, 2016: No Pictures Again Edition 


    I’ve hardly stopped reading the comics. I doubt I could even if I wanted to at this point. But all the comics in this bunch are from GoComics, which as far as I’m aware doesn’t turn off access to comic strips after a couple of weeks. So I don’t quite feel justified in including the images of the comics when you can just click links to them instead.

    It feels a bit barren, I admit. I wonder if I shouldn’t commission some pictures so I have something for visual appeal. There are people I know who do comics online. They might be able to think of something to go alongside every “Student has snarky answer for a word problem” strip.

    Brian and Ron Boychuk’s The Chuckle Brothers for the 8th of May drops in an absolute zero joke. Absolute zero’s a neat concept. People became aware of it partly by simple extrapolation. Given that the volume of a gas drops as the temperature drops, is there a temperature at which the volume drops to zero? (It’s complicated. But that’s the thread I use to justify pointing out this strip here.) And people also expected there should be an absolute temperature scale because it seemed like we should be able to describe temperature without tying it to a particular method of measuring it. That is, it would be a temperature “absolute” in that it’s not explicitly tied to what’s convenient for Western Europeans in the 19th century to measure. That zero and that instrument-independent temperature idea get conflated, and reasonably so. Hasok Chang’s Inventing Temperature: Measurement and Scientific Progress is well worth the read for people who want to understand absolute temperature better.
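
    That extrapolation is easy to sketch. Here’s a toy version of it, with an assumed starting volume: if a gas’s volume shrinks linearly as it cools, the line reaches zero volume at one particular temperature.

```python
# Toy Charles's-law extrapolation. The starting volume is a stand-in value;
# the point is where the straight line crosses zero.
V0 = 22.4  # litres at 0 degrees Celsius (assumed)

def volume_at(t_celsius):
    """Volume of the gas, extrapolating linearly in temperature."""
    return V0 * (1 + t_celsius / 273.15)

# The line hits zero volume at -273.15 C, absolute zero on the Celsius scale.
t_zero = -273.15
```

    Of course real gases liquefy long before that point, which is part of why I say it’s complicated.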

    Gene Weingarten, Dan Weingarten & David Clark’s Barney and Clyde for the 9th is another strip that seems like it might not belong here. While it’s true that accidents sometimes lead to great scientific discoveries, what has that to do with mathematics? One answer is that there are mathematical accidents and empirical discoveries, too. Many of them are computer-assisted. There is something that feels experimental about doing a simulation. Modern chaos theory, the study of deterministic yet unpredictable systems, has as its founding myth Edward Lorenz discovering that tiny changes in a crude weather simulation program mattered almost right away. (By founding myth I don’t mean that it didn’t happen. I just mean it’s become the stuff of mathematics legend.)

    But there are other ways that “accidents” can be useful. Monte Carlo methods are often used to find extreme — maximum or minimum — solutions to complicated systems. These are good when it’s hard to find the best possible answer directly, but easy to compare whether one solution is better or worse than another. We can get close to the best possible answer by picking an answer at random, and fiddling with it at random. If we improve things, good: keep the change. You can see why this should get us pretty close to a best-possible answer soon enough. And if we make things worse, then we usually, but not always, reject the change. Sometimes we accept the “accident”. That’s because if we only ever took improvements we might get caught at a local extreme. An even better extreme might be available, but only by going down an initially unpromising direction. So it’s worth allowing some “mistakes”.
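
    What I’m describing is essentially simulated annealing. Here’s a minimal sketch, with a made-up two-well function and arbitrary tuning parameters: improvements are always kept, and worsening moves are kept with a probability that shrinks as the run cools, which gives the search a chance to hop out of a shallow well.

```python
import math
import random

def anneal(f, x0, steps=5000, temp=1.0, cooling=0.999, seed=42):
    """Minimize f by random fiddling, occasionally accepting a worse answer."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, best_f = x, fx
    for _ in range(steps):
        candidate = x + rng.uniform(-0.5, 0.5)
        fc = f(candidate)
        # Keep improvements; keep a worsening "mistake" with shrinking odds.
        if fc < fx or rng.random() < math.exp((fx - fc) / temp):
            x, fx = candidate, fc
            if fx < best_f:
                best, best_f = x, fx
        temp *= cooling
    return best, best_f

# Two wells: a shallow local minimum near x = +1 and a deeper one near x = -1.
# The search starts in the shallow well.
f = lambda x: (x * x - 1) ** 2 + 0.3 * x
best, best_f = anneal(f, x0=1.0)
```

    The cooling schedule and step size here are arbitrary; tuning them is most of the craft in practice.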

    Mark Anderson’s Andertoons for the 10th is some wordplay on volume. The volume of a box is an easy formula to remember, and maybe it’s a boring one. It’s enough, though. You can work out the volume of any shape using just the volumes of boxes. But you do need integral calculus to tell you how to do it. So maybe it’s easier to memorize the formulas for the volumes of a pyramid and a sphere.

    Berkeley Breathed’s Bloom County for the 10th of May is a rerun from 1981. And it uses a legitimate bit of mathematics for Milo to insult Freida. He calls her a “log 10 times 10 to the derivative of 10,000”. The “log 10” is going to be 1. A reference to logarithm, without a base attached, means either base ten or base e. “log” by itself used to invariably mean base ten, back when logarithms were needed to do ordinary multiplication and division and exponentiation. Now that we have calculators for this, mathematicians have started reclaiming “log” to mean the natural logarithm, base e, which is normally written “ln”, but that’s still an eccentric use. Anyway, the logarithm base ten of ten is 1: 10 is equal to 10 to the first power.

    10 to the derivative of 10,000 … well, that’s 10 raised to whatever number “the derivative of 10,000” is. Derivatives take us into calculus. They describe how much a quantity changes as one or more variables change. 10,000 is just a number; it doesn’t change. It’s called a “constant”, in another bit of mathematics lingo that reminds us not all mathematics lingo is hard to understand. Since it doesn’t change, its derivative is zero. As anything else changes, the constant 10,000 does not. So the derivative of 10,000 is zero. 10 to the zeroth power is 1.
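
    The whole insult fits in a couple of lines; the variable names here are just narration:

```python
import math

log_part = math.log10(10)           # the logarithm base ten of ten is 1
derivative_part = 0.0               # the derivative of the constant 10,000 is 0
power_part = 10 ** derivative_part  # ten to the zeroth power is 1
insult = log_part * power_part      # so Milo has called Freida a 1
```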

    So, one times one is … one. And it’s rather neat that kids Milo’s age understand derivatives well enough to calculate that.

    Ruben Bolling’s Super-Fun-Pak Comix rerun for the 10th happens to have a bit of graph theory in it. One of Uncle Cap’n’s Puzzle Pontoons is a challenge to trace out a figure without retracing a line or lifting your pencil. You can’t, not this figure. One of the first things you learn in graph theory teaches how to tell, and why. And thanks to a Twitter request I’m figuring to describe some of that for the upcoming Theorem Thursdays project. Watch this space!
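
    The test itself is short enough to sketch, assuming the figure is connected: count the corners where an odd number of lines meet.

```python
def has_euler_trail(edges):
    """True when a figure can be traced without retracing or lifting the pencil.

    Assumes the graph is connected (the full theorem needs that too).
    A trail exists exactly when the number of odd-degree vertices is 0 or 2.
    """
    degree = {}
    for u, v in edges:
        degree[u] = degree.get(u, 0) + 1
        degree[v] = degree.get(v, 0) + 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    return odd in (0, 2)

triangle = [("A", "B"), ("B", "C"), ("C", "A")]
# A square with both diagonals drawn: every corner has degree 3,
# so there are four odd-degree vertices and the figure is untraceable.
envelope = [("A", "B"), ("B", "C"), ("C", "D"), ("D", "A"), ("A", "C"), ("B", "D")]
```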

    Charles Schulz’s Peanuts Begins for the 11th, a rerun from the 6th of February, 1952, is cute enough. It’s one of those jokes about how a problem seems intractable until you’ve found the right way to describe it. I can’t fault Charlie Brown’s thinking here. Figuring out a way to make the problems familiar and easy is great.

    Shaenon K Garrity and Jeffrey C Wells’s Skin Horse for the 12th is a “see, we use mathematics in the real world” joke. In this case it’s triangles and triangulation. That’s probably the part of geometry it’s easiest to demonstrate a real-world use for, and that makes me realize I don’t remember mathematics class making use of that. I remember it coming up some, particularly in what must have been science class when we built and launched model rockets. We used a measurement of the angle the rocket reached above the horizon, and knowledge of how far the observing station was from the launchpad. But that wasn’t mathematics class for some reason, which is peculiar.

     
  • Joseph Nebus 2:34 am on Monday, 15 December, 2014 Permalink | Reply
    Tags: degree symbol, degrees, temperature

    Reading the Comics, December 14, 2014: Pictures Gone Again? Edition 


    I’ve got enough comics to do a mathematics-comics roundup post again, but none of them are the King Features or Creators or other miscellaneous sources that demand they be included here in pictures. I could wait a little over three hours and give the King Features Syndicate comics another chance to say anything on point, or I could shrug and go with what I’ve got. It’s a tough call. Ah, what the heck; besides, it’s been over a week since I did the last one of these.

    Bill Amend’s FoxTrot (December 7) bids to get posted on mathematics teachers’ walls with a bit of play on two common uses of the term “degree”. It’s also natural to wonder why the same word “degree” should be used to represent the units of temperature and the size of an angle, to the point that they even use the same symbol of a tiny circle elevated from the baseline as a shorthand representation. As best I can make out, the use of the word degree traces back to Old French, and “degré”, meaning a step, as in a stair. In Middle English this got expanded to the notion of one of a hierarchy of steps, and if you consider the temperature of a thing, or the width of an angle, as something that can be grown or shrunk then … I’m left wondering if the Middle English folks who extended “degree” to temperatures and angles thought there were discrete steps by which either quantity could change.

    As for the little degree symbol, Florian Cajori notes in A History Of Mathematical Notations that while the symbol (and the ‘ and ” for minutes and seconds) can be found in Ptolemy (!), in describing Babylonian sexagesimal fractions, this doesn’t directly lead to the modern symbols. Medieval manuscripts and early printed books would use abbreviations of Latin words describing what the numbers represented. Cajori rates as the first modern appearance of the degree symbol an appendix, composed by one Jacques Peletier, to the 1569 edition of the text Arithmeticae practicae methodus facilis by Gemma Frisius (you remember him; the guy who made triangulation into something that could be used for surveying territories). Peletier was describing astronomical fractions, and used the symbol to denote that the thing before it was a whole number. By 1571 Erasmus Reinhold (whom you remember from working out the “Prutenic Tables”, updated astronomical charts that helped convince people of the use of the Copernican model of the solar system and advance the cause of calendar reform) was using the little circle to represent degrees, and Tycho Brahe followed his example, and soon … well, it took a century or so of competing symbols, including “Grad” or “Gr” or “G” to represent degree, but the little circle eventually won out. (Assume the story is more complicated than this. It always is.)

    Mark Litzer’s Joe Vanilla (December 7) uses a panel of calculus to suggest something particularly deep or intellectually challenging. As it happens, the problem isn’t quite defined well enough to solve, but if you make a reasonable assumption about what’s meant, then it becomes easy to say: this expression is “some infinitely large number”. Here’s why.

    The numerator is the integral \int_{0}^{\infty} e^{\pi} + \sin^2\left(x\right) dx . You can think of the integral of a positive-valued expression as the area underneath that expression and between the lines marked by, on the left, x = 0 (the number on the bottom of the integral sign), and on the right, x = \infty (the number on the top of the integral sign). (You know that it’s x because the integral symbol ends with “dx”; if it ended “dy” then the integral would tell you the left and the right bounds for the variable y instead.) Now, e^{\pi} + \sin^2\left(x\right) is a number that depends on x, yes, but which is never smaller than e^{\pi} (about 23.14) nor bigger than e^{\pi} + 1 (about 24.14). So the area underneath this expression has to be at least as big as the area within a rectangle that’s got a bottom edge at y = 0, a top edge at y = 23, a left edge at x = 0, and a right edge at x infinitely far off to the right. That rectangle’s got an infinitely large area. The area underneath this expression has to be no smaller than that.
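
    As a numeric check (the sample points are arbitrary), the integrand never leaves the band between e^π and e^π + 1, which is the whole argument for the infinite area:

```python
import math

floor = math.exp(math.pi)  # about 23.14; the integrand never dips below this
for x in [0.0, 0.5, 1.0, 2.0, 10.0, 100.0]:
    value = floor + math.sin(x) ** 2  # sin squared stays between 0 and 1
    assert floor <= value <= floor + 1
```

    An infinitely wide rectangle of height roughly 23 has infinite area, and the region under the curve contains such a rectangle.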

    Just because the numerator’s infinitely large doesn’t mean that the fraction is, though. It’s imaginable that the denominator is also infinitely large, and more wondrously, is large in a way that makes the ratio some more familiar number like “3”. Spoiler: it isn’t.

    Actually, as it is, the denominator isn’t quite much of anything. It’s a summation; that’s what the capital sigma designates there. By convention, the summation symbol means to evaluate whatever expression there is to the right of it — in this case, it’s x^{\frac{1}{e}} + \cos\left(x\right) — for each of a series of values of some index variable. That variable is normally identified underneath the sigma, with a line such as x = 1, and then (again by convention) the expression is evaluated for x = 2, x = 3, x = 4, and so on, until x equals whatever the number on top of the sigma is. In this case, the bottom doesn’t actually say what the index should be, although since “x” is the only thing that makes sense as a variable within the expression — “cos” means the cosine function, and “e” means the number that’s about 2.71828 unless it’s otherwise made explicit — we can suppose that this is a normal bit of shorthand like you use when context is clear.

    With that assumption about what’s meant, then, we know the denominator is whatever number is represented by \left(1^{\frac{1}{e}} + \cos\left(1\right)\right) + \left(2^{\frac{1}{e}} + \cos\left(2\right)\right) + \left(3^{\frac{1}{e}} + \cos\left(3\right)\right) + \left(4^{\frac{1}{e}} + \cos\left(4\right)\right) +  \cdots + \left(10^{\frac{1}{e}} + \cos\left(10\right)\right) (and 1/e is about 0.368). That’s a number about 16.549, which falls short of being infinitely large by an infinitely large amount.
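
    That arithmetic is quick to check, assuming the x = 1 through 10 reading of the sum:

```python
import math

# The denominator, read as a sum over x = 1, 2, ..., 10.
total = sum(x ** (1 / math.e) + math.cos(x) for x in range(1, 11))
```

    This comes out near 16.549, which is indeed not an infinitely large number.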

    So, the original fraction shown represents an infinitely large number.

    Mark Tatulli’s Lio (December 7) is another “anthropomorphic numbers” genre comic, and since it’s Lio the numbers naturally act a bit mischievously.

    Greg Evans’s Luann Againn (December 7, I suppose technically a rerun) only has a bit of mathematical content, as it’s really playing more on short- and long-term memories. Normal people, it seems, have a buffer of something around eight numbers that they can remember without losing track of them, and it’s surprisingly easy to overload that. I recall reading, I think in Joseph T Hallinan’s Why We Make Mistakes: How We Look Without Seeing, Forget Things In Seconds, And Are All Pretty Sure We are Way Above Average, and don’t think I’m not aware of how funny it would be if I were getting this source wrong, that it’s possible to cheat a little bit on the size of one’s number-buffer.

    Hallinan (?) gave the example of a runner who was able to remember strings of dozens of numbers, well past the norm, but apparently by the trick of parsing numbers into plausible running times. That is, the person would remember “834126120820” perfectly because it could be expressed as four numbers, “8:34, 1:26, 1:20, 8:20”, that might be credible running times for something or other and the runner was used to remembering such times. Supporting the idea that this trick was based on turning a lot of digits into a few small numbers was that the runner would be lost if the digits could not be parsed into a meaningful time, like, “489162693077”. So, in short, people are really weird in how they remember and don’t remember things.

    Harley Schwadron’s 9 to 5 (December 8) is a “reluctant student” strip whose kid, in the tradition of kids in comic strips, tosses out the word “app” in the hope of upgrading the exchange into a joke. I’m sympathetic to the kid not wanting to do long division. In arithmetic the way I was taught it, this was the first kind of problem where you pretty much had to approximate and make a guess what the answer might be and improve your guess from that starting point, and that’s a terrifying thing when, up to that point, arithmetic has been a series of predictable, discrete, universally applicable rules not requiring you to make a guess. It feels wasteful of effort to work out, say, what seven times your divisor is when it turns out it’ll go into the dividend eight times. I am glad that teaching approaches to arithmetic seem to be turning towards “make approximate or estimated answers, and try to improve those” as a general rule, since often taking your best guess and then improving it is the best way to get a good answer, not just in long division, and the less terrifying that move is, the better.
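
    The guess-and-improve step is plain to see if you write the pencil-and-paper procedure out as code. This is a sketch, not any textbook’s official algorithm:

```python
def long_division(dividend, divisor):
    """Pencil-and-paper long division: guess a trial digit, then improve it."""
    quotient = 0
    remainder = 0
    for digit in str(dividend):
        remainder = remainder * 10 + int(digit)
        trial = 0
        # The guess-and-improve step: raise the trial digit until it won't fit.
        while (trial + 1) * divisor <= remainder:
            trial += 1
        quotient = quotient * 10 + trial
        remainder -= trial * divisor
    return quotient, remainder
```

    So long_division(1234, 7) reproduces the familiar answer of 176 remainder 2, one improvable guess per digit.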

    Justin Boyd’s Invisible Bread (December 12) reveals the joy and the potential menace of charts and graphs. It’s a reassuring red dot at the end of this graph of relevant-graph-probabilities.

    Several comics chose to mention the coincidence of the 13th of December being (in the United States standard for shorthand dating) 12-13-14. Chip Sansom’s The Born Loser does the joke about how yes, this sequence won’t recur in (most of our) lives, but neither will any other. Stuart Carlson and Jerry Resler’s Gray Matters takes a little imprecision in calling it “the last date this century to have a consecutive pattern”, something the Grays, if the strip is still running, will realize on 1/2/34 at the latest. And Francesco Marciuliano’s Medium Large uses the neat pattern of the dates as a dip into numerology and the kinds of manias that staring too closely into neat patterns can encourage.

     
    • ivasallay 8:58 am on Monday, 15 December, 2014 Permalink | Reply

      I love the Foxtrot panel.
      How I would remember 49 – 61 – 32: First 7^2 – then the largest prime number less than 8^2 – Finally 2^5.
      The Born Loser makes a good point.
      Thanks for reading all those thousands of comics and sharing with us!


      • Joseph Nebus 10:42 pm on Monday, 15 December, 2014 Permalink | Reply

        Oh, now, I forgot somehow to mention how Charlie Brown remembers his locker combinations. He remembers the names of baseball players whose uniform numbers are the ones he wants, which is a good scheme for people who remember names well.


    • fluffy 5:47 pm on Monday, 15 December, 2014 Permalink | Reply

      Thanks to not-so-smart quotes, you ended up with the wrong symbols for minutes (‘) and seconds (“). Yet another case where pervasive assistive technology hurts more than it helps.


      • fluffy 5:47 pm on Monday, 15 December, 2014 Permalink | Reply

        And ironically, WordPress decided to “help” me too by “smartifying” them. Argh.

        Let’s see if HTML entities work: ' "


        • Joseph Nebus 10:43 pm on Monday, 15 December, 2014 Permalink | Reply

          I actually did see WordPress inappropriately smarting-up my quotes when I previewed, but I figured that trying to fix it would be too much work by requiring any effort on my part. I’m sorry to have bothered you enough that you had to dig into HTML entities over it.


          • fluffy 3:16 pm on Tuesday, 16 December, 2014 Permalink | Reply

            Eh, I know the major entities by heart. ', ", and, while we’re at it, &.


            • Joseph Nebus 8:35 pm on Wednesday, 17 December, 2014 Permalink | Reply

              Oh, I could hardly not remember &. I just use it too much. The quote mark and apostrophes I haven’t used content-management systems long enough to need to deal with.


    • Aquileana 2:18 am on Tuesday, 16 December, 2014 Permalink | Reply

      Great overview… So clever and enjoyable at the same time.
      Sending you all my best wishes!. Aquileana :D


  • Joseph Nebus 9:40 pm on Thursday, 8 May, 2014 Permalink | Reply
    Tags: Kelvin, temperature

    The ideal gas equation 


    I did want to mention that the CarnotCycle big entry for the month is “The Ideal Gas Equation”. The Ideal Gas equation is one of the more famous equations that isn’t F = ma or E = mc2, which I admit isn’t a group of really famous equations; but, at the very least, its content is familiar enough.

    If you keep a gas at constant temperature, and increase the pressure on it, its volume decreases, and vice-versa, known as Boyle’s Law. If you keep a gas at constant volume, and decrease its pressure, its temperature decreases, and vice-versa, known as Gay-Lussac’s law. Then Charles’s Law says if a gas is kept at constant pressure, and the temperature increases, then the volume increases, and vice-versa. (Each of these is probably named for the wrong person, because they always are.) The Ideal Gas equation combines all these relationships into one, neat, easily understood package.
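
    A small sketch, with an arbitrary proportionality constant, shows how all three named laws fall out of one relation PV = kT:

```python
k = 8.314  # a stand-in proportionality constant; the units are whatever you like

def volume(P, T):
    """V from PV = kT."""
    return k * T / P

def pressure(V, T):
    """P from PV = kT."""
    return k * T / V

# Boyle: at constant temperature, P times V stays the same.
boyle = 1.0 * volume(1.0, 300.0) - 2.0 * volume(2.0, 300.0)
# Charles: at constant pressure, V divided by T stays the same.
charles = volume(1.0, 300.0) / 300.0 - volume(1.0, 600.0) / 600.0
# Gay-Lussac: at constant volume, P divided by T stays the same.
gay_lussac = pressure(5.0, 300.0) / 300.0 - pressure(5.0, 600.0) / 600.0
```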

    Peter Mander describes some of the history of these concepts and equations, and how they came together, with the interesting way that they connect to the absolute temperature scale, and of absolute zero. Absolute temperatures — Kelvin — and absolute zero are familiar enough ideas these days that it’s difficult to remember they were ever new and controversial and intellectually challenging ideas to develop. I hope you enjoy.


    carnotcycle


    If you received formal tuition in physical chemistry at school, then it’s likely that among the first things you learned were the 17th/18th century gas laws of Mariotte and Gay-Lussac (Boyle and Charles in the English-speaking world) and the equation that expresses them: PV = kT.

    It may be that the historical aspects of what is now known as the ideal (perfect) gas equation were not covered as part of your science education, in which case you may be surprised to learn that it took 174 years to advance from the pressure-volume law PV = k to the combined gas law PV = kT.


    The lengthy timescale indicates that putting together closely associated observations wasn’t regarded as a must-do in this particular era of scientific enquiry. The French physicist and mining engineer Émile Clapeyron eventually created the combined gas equation, not for its own sake, but because he needed an…


     
  • Joseph Nebus 12:48 am on Thursday, 3 October, 2013 Permalink | Reply
    Tags: quantum field theory, temperature

    From ElKement: May The Force Field Be With You 


    I’m derelict in mentioning this but ElKement’s blog, Theory And Practice Of Trying To Combine Just Anything, has published the second part of a non-equation-based description of quantum field theory. This one, titled “May The Force Field Be With You: Primer on Quantum Mechanics and Why We Need Quantum Field Theory”, is about introducing the idea of a field, and a bit of how they can be understood in quantum mechanics terms.

    A field, in this context, means some quantity that’s got a defined value for every point in space and time that you’re studying. As ElKement notes, temperature is probably the field most familiar to people. I’d imagine that’s partly because it’s relatively easy to feel the temperature change as one goes about one’s business — after all, gravity is also a field, but almost none of us feel it appreciably change — and because weather maps make its changes in space and in time available in attractive pictures.

    The thing the field contains can be just about anything. The temperature would be just a plain old number, or as mathematicians would have it a “scalar”. But you can also have fields that describe stuff like the pull of gravity, which is a certain amount of pull and pointing, for us, toward the center of the earth. You can also have fields that describe, for example, how quickly and in what direction the water within a river is flowing. These strengths-and-directions are called “vectors” [1], and a field of vectors offers a lot of interesting mathematics and useful physics. You can also plunge into more exotic mathematical constructs, but you don’t have to. And you don’t need to understand any of this to read ElKement’s more robust introduction to all this.
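
    A minimal sketch of the distinction, with made-up formulas: a scalar field hands back one plain number at each point, while a vector field hands back a strength and a direction.

```python
import math

def temperature(x, y):
    """A scalar field: one plain number at each point (invented formula)."""
    return 20.0 + 0.1 * x - 0.05 * y

def flow(x, y):
    """A vector field: a strength and direction at each point.

    This one describes water swirling counterclockwise around the origin.
    """
    return (-y, x)

speed = math.hypot(*flow(3.0, 4.0))  # the magnitude of the vector at (3, 4)
```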

    [1] The independent student newspaper for the New Jersey Institute of Technology is named The Vector, and has as motto “With Magnitude and Direction Since 1924”. I don’t know if other tech schools have newspapers which use a similar joke.

     
    • elkement 6:22 am on Thursday, 3 October, 2013 Permalink | Reply

      Thanks again for your kind pingback and publicity :-)
      I need to get to vectors and tensors in the next post(s) but I am still trying to figure out how to do this without mentioning those terms. Fluid dynamics is often a good starting point, e.g. to introduce, ‘derive’ or better motivate Schrödinger’s equation. On the other hand Feynman used to plunge directly into path integrals – presenting them as a rule along the lines of “This is the way nature works – live with it” – and deriving Schrödinger’s equation later.


      • Joseph Nebus 3:20 am on Saturday, 5 October, 2013 Permalink | Reply

        I’m not quite sure how I’d do either. I think I could probably explain vectors without having to use mathematical symbolism, since the idea of stuff moving at particular speeds in directions can call on physical intuition. Tensors I don’t know how I’d try to explain in popular terms, partly because I’m not really as proficient in them as I should be. I probably need to think seriously about my own understanding of them.


        • elkement 6:26 pm on Monday, 7 October, 2013 Permalink | Reply

          I have also always considered easier to imagine the different aspects of a vector – the abstract object and the ‘arrow’ as it lives in a specific base. But how do you really imagine the ‘abstract tensor object’ – in contrast to a ‘matrix’ (with more than 3 dimensions probably…)

          I have started to read about general relativity (… will finish after I have finally understood the Higgs…) and it took me quite a while to comprehend that you are not allowed to shift a vector in curved space as you shift the ‘arrow’. Actually it made me think about vectors in a new way…


          • Joseph Nebus 2:46 am on Friday, 18 October, 2013 Permalink | Reply

            (I’m embarrassed that I lost this comment somehow.)

            I can sort of reconstruct the process when I think I started to get vectors as a concept, particularly in thinking of them as not tied to some particular point, or even containing information about a point, but somehow floating freely off that. If I get around to trying to explain vectors I might even be able to make all that explicit again.

            Tensors I keep feeling like I’m on the verge of having that intuitive leap to where I have some mental model for how they work but I keep finding I don’t quite do enough work with them that it gets past following the rules and into really understanding the rules.


  • Joseph Nebus 9:44 pm on Friday, 21 June, 2013 Permalink | Reply
    Tags: temperature

    Where Do Negative Numbers Come From? 


    Some time ago — and I forget when, I’m embarrassed to say, and can’t seem to find it because the search tool doesn’t work on comments — I was asked about how negative numbers got to be accepted. That’s a great question, particularly since while it seems like the idea of positive numbers is probably lost in prehistory, negative numbers definitely progressed in the past thousand years or so from something people might wildly speculate about to being a reasonably comfortable part of daily mathematics.

    While searching for background information I ran across a doctoral thesis, Making Sense Of Negative Numbers, which is uncredited in the PDF I just linked to but appears to be by Dr Cecilia Kilhamn, of the University of Gothenburg, Sweden. Dr Kilhamn’s particular interest (here) is in how people learn to use negative numbers, so most of the thesis is about the conceptual difficulties people have when facing the minus sign (not least because it serves two roles, of marking a number as negative and of marking the subtraction operation), but the first chapters describe the historical process of developing the concept of negative numbers.


     
    • Peter Mander 6:21 pm on Saturday, 22 June, 2013 Permalink | Reply

      It was me. And thank you for a fascinating post.

      (Reading the Comics, Feb 26)


      • Joseph Nebus 5:15 am on Friday, 28 June, 2013 Permalink | Reply

        Oh, thank you so. I had a feeling it was one of the Reading the Comics threads but couldn’t pin down which.

        I’m really quite interested in trying to understand different models people had for negative numbers. References I’ve seen to people hypothesizing that negative numbers were larger than positive ones are intriguing, particularly when I think about the statistical mechanics definition of temperature.


    • Steve Morris 3:21 pm on Wednesday, 5 March, 2014 Permalink | Reply

      I am surprised that there was opposition so recently in history. I wonder when imaginary numbers came to be accepted?


      • Joseph Nebus 5:04 am on Thursday, 6 March, 2014 Permalink | Reply

        Folklore in the mathematics department is that imaginary numbers still aren’t really accepted at least over in the electrical engineering department.

        (Electrical engineers, the mathematics folks say, use ‘j’ rather than ‘i’ to denote imaginary numbers, which are a convenient way to represent properties of alternating currents. The lore is that this is because electrical engineers won’t put up with ‘imaginary’ numbers because they’re not real, although the notion that this is because ‘i’, the symbol, is already doing heavy enough work representing quantities like current is more compelling to me.)


    • howardat58 12:55 pm on Tuesday, 9 September, 2014 Permalink | Reply

      In the real world:
      We count, which generates the need for numbers, the natural numbers (including zero).
      We need to measure amounts of stuff, weight, volume, area etcetera, which requires a unit of measurement and fractional numbers.
      We need to describe positions, levels and changes, temperature, voltage, height, which leads to signed numbers (positive and negative).

      These are three very different types of activity, and the simple minded idea that each of these number systems is simply an extension of the previous is not helpful to the understanding of what is going on.

      There is a big difference between 3 apples and 3 feet.
      There is an even bigger difference between 3 feet and 3 volts.

      Algebra assumes that we are working in the signed number system, although some of the quantities involved, when algebra is applied to the real world, may be amounts, or counts. (Diophantine equations excepted).

      With operations difficulties can arise unless we are very careful.
      The worst case is “subtraction”. In the counting numbers it means “take away”.
      In the measuring of amounts it means “cut off” or “pour away”.
      In the measurement of position or level it means “lower by”.
      The sign of a signed number says “above” or “below” zero, and also it specifies the direction of a change.

      Here is my extract from A N Whitehead’s “An Introduction to Mathematics” (1911). It’s a good read.

      http://howardat58.files.wordpress.com/2014/08/whitehead-intro-to-math-negative-nos.doc


      • Joseph Nebus 6:52 pm on Friday, 12 September, 2014 Permalink | Reply

        That Whitehead extract is a very good read, yes. I like the precise outlining of the different ways we might mean signed numbers to be; it is probably a slipping of intuitive feeling between one model of negative numbers and the others that causes a lot of trouble working with them.

