Updates from March, 2016

  • Joseph Nebus 3:00 pm on Friday, 11 March, 2016 Permalink | Reply

    A Leap Day 2016 Mathematics A To Z: Fractions (Continued) 


    Another request! I was asked to write about continued fractions for the Leap Day 2016 A To Z. The request came from Keilah, of the Knot Theorist blog. But I’d already had a c-word request in (conjecture). So you see my elegant workaround to talk about continued fractions anyway.

    Fractions (continued).

    There are fashions in mathematics. There are fashions in all human endeavors. But mathematics almost begs people to forget that it is a human endeavor. Sometimes a field of mathematics will be popular a while and then fade. Some fade almost to oblivion. Continued fractions are one of them.

    A continued fraction comes from a simple enough starting point. Start with a whole number. Add a fraction to it: 1 + \frac{2}{3}. Everyone knows what that is. But then look at the denominator. In this case, that’s the ‘3’. Why couldn’t that be a sum instead? No reason. Imagine then the number 1 + \frac{2}{3 + 4}. Is there a reason that, instead of the ‘4’ there, we couldn’t have another fraction? No reason beyond our own timidity. Let’s be courageous. Does 1 + \frac{2}{3 + \frac{4}{5}} even mean anything?

    Well, sure. It’s getting a little hard to read, but 3 + \frac{4}{5} is a fine enough number. It’s 3.8. \frac{2}{3.8} is a less friendly number, but it’s a number anyway. It’s a little over 0.526. (Its decimal expansion never quite ends, in fact; it settles into a repeating block eighteen digits long. But it’s a perfectly ordinary rational number.) And we can add 1 to that easily. So 1 + \frac{2}{3 + \frac{4}{5}} means a number a slight bit more than 1.526.
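    If you want to check that arithmetic, a few lines of Python will do it; this is just a sketch of evaluating the thing from the innermost denominator outward:

        from fractions import Fraction

        # 1 + 2/(3 + 4/5), built from the inside out
        inner = 3 + Fraction(4, 5)      # 19/5, that is, 3.8
        value = 1 + 2 / inner           # 1 plus a little over 0.526
        print(value, float(value))      # 29/19, about 1.5263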

    Dare we replace the “5” in that expression with a sum? Better, with the sum of a whole number and a fraction? If we don’t fear being audacious, yes. Could we replace the denominator of that with another sum? Yes. Can we keep doing this forever, creating this never-ending stack of whole numbers plus fractions? … If we want an irrational number, anyway. If we want a rational number, this stack will eventually end. But suppose we feel like creating an infinitely long stack of continued fractions. Can we do it? Why not? Who dares, wins!

    OK. Wins what, exactly?

    Well … um. Continued fractions certainly had a fashionable time. John Wallis, the 17th century mathematician famous for introducing the ∞ symbol, and for an interminable quarrel with Thomas Hobbes over Hobbes’s attempts to reform mathematics, did much to establish continued fractions as a field of study. (He’s credited with inventing the field. But all claims to inventing something big are misleading. Real things are complicated and go back farther than people realize, and inventions are more ambiguous than people think.) The astronomer Christiaan Huygens showed how to use continued fractions to design better gear ratios. This may strike you as the dullest application of mathematics ever. Let it. It’s also important stuff. People who need to scale one movement to another need this.

    In the 18th and 19th centuries continued fractions became interesting for higher mathematics. Continued fractions were the approach Leonhard Euler used to prove that e, one of the superstar numbers of mathematics, had to be irrational. Johann Heinrich Lambert used continued fractions to show that if θ is a rational number (other than zero) then the tangent of θ must be irrational. This is one path to showing that π must be irrational. Many of the astounding theorems of Srinivasa Ramanujan were about continued fractions, or ideas which built on continued fractions.

    But since the early 20th century the field’s evaporated. I don’t have a good answer why. The best speculation I’ve heard is that the field fits poorly into any particular topic. Continued fractions get interesting when you have an infinitely long stack of nesting denominators, and you don’t want to work with infinitely long strings of things before you’ve studied calculus and gotten comfortable with them. But that means students don’t encounter the subject until college, at least, and at that point fractions seem beneath the grade level. There’s a handful of proofs best done with continued fractions, but those can be presented as odd, novel approaches to their particular problems. Studying the whole field is hardly needed.

    So, perhaps because it seems like an odd fit, the subject’s dried up and blown away. Even enthusiasts seem resigned to its oblivion. Professor Adam Van Tuyl, then at Queen’s University in Kingston, Ontario, composed a nice set of introductory pages about continued fractions, but those pages are now defunct. Dr Ron Knott has a more thorough page, though, and one with calculators that work well.

    Will continued fractions make a comeback? Maybe. It might take the discovery of some interesting new results, or some better visualization tools, to reignite interest. Chaos theory, the study of deterministic yet unpredictable systems, first grew (we now recognize) in the 1890s. But it fell into obscurity. When we got some new theoretical papers and the ability to do computer simulations, it flowered again. For a time it looked ready to take over all mathematics, although we’ve got things under better control now. Could continued fractions do the same? I’m skeptical, but won’t rule it out.

    Postscript: something you notice quickly with continued fractions is they’re a pain to typeset. We’re all right with 1 + \frac{2}{3 + \frac{4}{5}} . But after that the LaTeX engine that WordPress uses to render mathematical symbols is doomed. A real LaTeX engine gets another couple nested denominators in before the situation is hopeless. If you’re writing this out on paper, the way people did in the 19th century, that’s all right. But there’s no typing it out that way.

    But notation is made for us, not us for notation. If we want to write a continued fraction in which the numerators are all 1, we have a brackets shorthand available. In this we would write 2 + \frac{1}{3 + \frac{1}{4 + \cdots }} as [2; 3, 4, … ]. The numbers are the whole numbers added to the next level of fractions. Another option, and one that lends itself to having numerators which aren’t 1, is to write out a string of fractions. In this we’d write 2 + \frac{1}{3 +} \frac{1}{4 +} \frac{1}{\cdots + }. We have to trust people notice the + sign is in the denominator there. But if people know we’re doing continued fractions then they know to look for the peculiar notation.
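    And if you’d like to play with the brackets shorthand, here’s a minimal Python sketch of evaluating it; the function name is my own invention, not anything standard. A finite continued fraction pretty much has to be folded up from the bottom, which is what the loop does:

        from fractions import Fraction

        def evaluate_brackets(terms):
            """Evaluate [a0; a1, a2, ...], all numerators 1, from the right."""
            value = Fraction(terms[-1])
            for a in reversed(terms[:-1]):
                value = a + 1 / value
            return value

        print(evaluate_brackets([2, 3, 4]))   # 30/13, i.e. 2 + 1/(3 + 1/4)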

     
    • gaurish 5:09 pm on Friday, 11 March, 2016 Permalink | Reply

      I disagree! Research in the field of continued fractions never died, so there’s no question of a comeback. See the following two books:

      1. Continued Fractions, by Aleksandr Yakovlevich Khinchin (1964)
      2. Neverending Fractions: An Introduction to Continued Fractions, by Jonathan Borwein, Alf van der Poorten, and Jeffrey Shallit (2014)


      • Joseph Nebus 3:32 am on Monday, 14 March, 2016 Permalink | Reply

        I may be overstating things to say the field’s died. But I don’t remember it ever coming up in my own education, and I can’t — on a quick survey — find evidence of the subject being taught regularly at any of the colleges or universities I’ve had much to do with. It’s mentioned as one of the subjects for a special topics course offered every other year at Michigan State University. But that’s also at the end of the roster, where they usually list the things they’ll get to if there’s time, which there never is.

        And I know these aren’t the only books about continued fractions published recently, but 1964 isn’t all that recent. I am sure good people are finding interesting new results. But the field isn’t thriving the way, say, Monte Carlo methods, or wavelets, or KAM theory are.


        • gaurish 6:12 am on Monday, 14 March, 2016 Permalink | Reply

          Today I just skimmed through a paper on Continued Fractions published in Acta Arithmetica in September 2015. (https://goo.gl/CtXops) It’s recent I guess :-)

          Also, if you haven’t read the 1964 book I suggested in the previous comment, then you know nothing about continued fractions.

          You probably never dived deep into Number Theory, just as I never dived deep into Differential Equations and so didn’t know that KAM theory is an active field of research!

          At my institute (in India), continued fractions are taught in the 3rd semester, and in decent detail. In 2014, a paper on an unsolved problem about continued fractions (Zaremba’s Conjecture) appeared in the Annals of Mathematics (http://annals.math.princeton.edu/2014/180-1/p03 ).

          My whole point was: “If you don’t know something, it doesn’t mean that it doesn’t exist”.


          • Joseph Nebus 7:20 am on Wednesday, 16 March, 2016 Permalink | Reply

            I am happy to take correction. At least, I want to be happy to take correction. You’re right that I don’t know all that’s going on in mathematics — it’s remarkable I know anything that’s going on in mathematics — and I’d be a fool to say courses teaching the subject aren’t there. Thank you for letting me know there’s more in the field than I suspected.


    • KnotTheorist 8:21 pm on Friday, 11 March, 2016 Permalink | Reply

      Thanks for the informative post! I love reading about mathematical history.


  • Joseph Nebus 11:27 pm on Wednesday, 5 November, 2014 Permalink | Reply
    Tags: Fourier Analysis, Joseph Fourier, sinusoidal waves

    Echoing “Fourier Echoes Euler” 


    The above tweet is from the Analysis Fact of The Day feed, which for the 5th had a neat little bit taken from Joseph Fourier’s The Analytic Theory Of Heat, published in 1822. Fourier was trying to at least describe the way heat moves through objects, and along the way he developed the things we now call Fourier series and the field we now call Fourier analysis. In this we treat functions — even ones we don’t yet know — as sums of sinusoidal waves, overlapping and interfering with and reinforcing one another.

    If we have infinitely many of these waves we can approximate … well, not every function, but surprisingly close to all the functions that might represent real-world affairs, and surprisingly near all the functions we’re interested in anyway. The advantage of representing functions as sums of sinusoidal waves is that sinusoidal waves are very easy to differentiate and integrate, and to add together those differentials and integrals, and that means we can turn problems that are extremely hard into problems that may be longer, but are made up of much easier parts. Since usually it’s better to do something that’s got many easy steps than it is to do something with a few hard ones, Fourier series and Fourier analysis are some of the things you get to know well as you become a mathematician.

    The “Fourier Echoes Euler” page linked here shows simply one nice, sweet result that Fourier proved in that major work. It demonstrates what you get if, for a real number x (strictly between -π and π, anyway), you add together \cos\left(x\right) - \frac12 \cos\left(2x\right) + \frac13 \cos\left(3x\right) - \frac14 \cos\left(4x\right) + \frac15 \cos\left(5x\right) - \cdots et cetera. There’s one step in it — “integration by parts” — that you’ll have to remember from freshman calculus, or maybe I’ll get around to explaining that someday, but I would expect most folks reading this far could follow this neat result.
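    If you’d like to watch that sum settle down numerically, here’s a little Python sketch. The closed form it checks against, \ln\left(2 \cos\left(\frac{x}{2}\right)\right) for x strictly between -π and π, is my reconstruction of the result, so treat that last line as an assumption rather than a quotation:

        import math

        def partial_sum(x, terms):
            """cos(x) - cos(2x)/2 + cos(3x)/3 - ..., truncated."""
            return sum((-1) ** (n + 1) * math.cos(n * x) / n
                       for n in range(1, terms + 1))

        x = 1.0
        for terms in (10, 100, 1000):
            print(terms, partial_sum(x, terms))
        print(math.log(2 * math.cos(x / 2)))   # assumed closed form, about 0.56245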

     
    • howardat58 12:10 am on Thursday, 6 November, 2014 Permalink | Reply

      Not too good! The marauder link didn’t work.


      • Joseph Nebus 5:31 am on Thursday, 6 November, 2014 Permalink | Reply

        It doesn’t? I’m surprised, and sorry for the trouble.

        Are you referring to the link in the embedded tweet, or to the one that’s in my final paragraph? Both seem to work for me but goodness knows how WordPress shows things differently to me-as-blog-author than it does to other people.

        If the raw URL helps any http://www.mathmarauder.com/archives/227 should be it.


    • elkement 7:41 pm on Wednesday, 12 November, 2014 Permalink | Reply

      Great coincidence – I have just discovered that Excel can do Fourier transforms. I tried to find some hidden periodicities in the daily average ambient temperature, but the FFT has just a single peak, corresponding to 365 days :-)


      • Joseph Nebus 2:58 am on Thursday, 13 November, 2014 Permalink | Reply

        Oh, nice; I didn’t know Excel did that.

        I suppose it’s fair enough to have a strong peak at about 365 days. Next project: what does hourly temperature data tell us?


  • Joseph Nebus 5:27 pm on Friday, 17 October, 2014 Permalink | Reply
    Tags: Elke Stangl

    How Richard Feynman Got From The Square Root of 2 to e 


    I wanted to bring to greater prominence something that might have got lost in comments. Elke Stangl, author of the Theory And Practice Of Trying To Combine Just Anything blog, noticed that among the Richard Feynman Lectures on Physics, and available online, is his derivation of how to discover e — the base of natural logarithms — from playing around.

    e is an important number, certainly, but it’s tricky to explain why it’s important; it hasn’t got a catchy definition like pi has, and even the description that most efficiently says why it’s interesting (“the base of the natural logarithm”) sounds perilously close to technobabble. As an explanation for why e should be interesting Feynman’s text isn’t economical — I make it out as something around two thousand words — but it’s a really good explanation since it starts from a good starting point.

    That point is: it’s easy to understand what you mean by raising a number, say 10, to a positive integer: 10^4, for example, is four tens multiplied together. And it doesn’t take much work to extend that to negative numbers: 10^{-4} is one divided by the product of four tens multiplied together. Fractions aren’t too bad either: 10^{1/2} would be the number which, multiplied by itself, gives you 10. 10^{3/2} would be 10^{1/2} times 10^{1/2} times 10^{1/2}; or if you think this is easier (it might be!), the number which, multiplied by itself, gives you 10^3. But what about the number 10^{\sqrt{2}} ? And if you can work that out, what about the number 10^{\pi} ?

    There’s a pretty good, natural way to go about working that out, and as Feynman shows, by doing so you find there’s something special about a particular number pretty close to 2.71828.
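    One way to give that meaning, and this is only my sketch of the idea rather than Feynman’s own route: squeeze 10^{\sqrt{2}} between rational exponents, which the paragraph above already tells us how to interpret.

        import math

        # rational exponents just below and just above sqrt(2)
        for digits in range(1, 8):
            lo = math.floor(math.sqrt(2) * 10 ** digits) / 10 ** digits
            hi = lo + 10 ** -digits
            print(f"10^{lo} = {10 ** lo:.6f}   10^{hi} = {10 ** hi:.6f}")

    Both columns close in on 25.9546 or so, and that number is the only reasonable thing for 10^{\sqrt{2}} to be.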

     
    • elkement 7:30 pm on Friday, 17 October, 2014 Permalink | Reply

      So there is no easy way to boil down this section to a very short post, is there? I suppose you would need to resort to Taylor expansions? But Feynman tried to do without explaining too many theoretical concepts upfront – so that’s probably why it takes more lines and one complete example…


      • howardat58 7:53 pm on Friday, 17 October, 2014 Permalink | Reply

        Taylor series do rather depend on calculus and the fact that d/dx(exp(x)) = exp(x), and he wanted to do it all without calculus.
        You can define e as the solution to the equation ln(x) = 1, and with a proper definition of the natural log (which needs the ideas of calculus) this will work.
        Check my recent post on this:

        http://howardat58.wordpress.com/


      • Joseph Nebus 5:54 pm on Saturday, 18 October, 2014 Permalink | Reply

        I don’t know that the section couldn’t be boiled down to something short, actually; I didn’t think to try. Probably it would be possible to get to the conclusion more quickly, but I think at the cost of giving up Feynman’s fairly clear intention to bring the reader there by a series of leading investigatory questions, of getting there the playful way.

        It’s a good writing exercise to consider, though, and I might give it a try.


    • howardat58 7:55 pm on Friday, 17 October, 2014 Permalink | Reply

      Joseph, I read the log bit and the complex number bit. The latter is excellent and should be plastered on the wall of every high school math classroom. Thanks.


  • Joseph Nebus 5:12 pm on Sunday, 14 September, 2014 Permalink | Reply
    Tags: common logarithms, desert island problems, natural logarithms

    Without Machines That Think About Logarithms 


    I’ve got a few more thoughts about calculating logarithms, based on how the Harvard IBM Automatic Sequence-Controlled Calculator did things, and wanted to share them. I also have some further thoughts coming up shortly, courtesy of my first guest blogger, which is exciting to me.

    The procedure that was used back then to compute common logarithms — logarithms base ten — was built on several legs: that we can work out some logarithms ahead of time, that we can work out the natural (base e) logarithm of a number using an infinite series, that we can convert the natural logarithm to a common logarithm by a single multiplication, and that the logarithm of the product of two (or more) numbers equals the sum of the logarithm of the separate numbers.

    From that we got a pretty nice, fairly slick algorithm for producing logarithms. Ahead of time you have to work out the logarithms for 1, 2, 3, 4, 5, 6, 7, 8, and 9; and then, to make things more efficient, you’ll want the logarithms for 1.1, 1.2, 1.3, 1.4, et cetera up to 1.9; for that matter, you’ll also want 1.01, 1.02, 1.03, 1.04, and so on to 1.09. You can get more accurate numbers quickly by working out the logarithms for three digits past the decimal — 1.001, 1.002, 1.003, 1.004, and so on — and for that matter to four digits (1.0001) and more. You’re buying either speed of calculation or precision of result with memory.

    The process as described before worked out common logarithms, although there isn’t much reason that it has to be those. It’s a bit convenient, because if you want the logarithm of 47.2286 you’ll want to shift that to the logarithm of 4.72286 plus the logarithm of 10, and the common logarithm of 10 is a nice, easy 1. The same logic works in natural logarithms: the natural logarithm of 47.2286 is the natural logarithm of 4.72286 plus the natural logarithm of 10, but the natural logarithm of 10 is a not-quite-catchy 2.3026 (approximately). You pretty much have to decide whether you want to deal with factors of 10 being an unpleasant number, or to deal with calculating natural logarithms and then multiplying them by the common logarithm of e, about 0.43429.

    But the point is, if you found yourself with no computational tools but plenty of paper and time, you could reconstruct logarithms for any number you liked pretty well. Decide whether you want natural or common logarithms; I’d probably try working out both, since there’s presumably the time, after all, and who knows what kind of problems I’ll want to work out afterwards. And I can get quite nice accuracy after working out maybe 36 logarithms using the formula:

    \log_e\left(1 + h\right) = h - \frac12 h^2 + \frac13 h^3 - \frac14 h^4 + \frac15 h^5 - \cdots

    This will work very well for numbers like 1.1, 1.2, 1.01, 1.02, and so on: for this formula to work, h has to be between -1 and 1, or put another way, we have to be looking for the logarithms of numbers between 0 and 2. And it takes fewer terms to get the result as precise as you want the closer h is to zero, that is, the closer the number whose logarithm we want is to 1.
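    As a quick Python sketch of that claim, with math.log standing in as the reference value:

        import math

        def ln_series(h, terms):
            """Partial sum of ln(1+h) = h - h^2/2 + h^3/3 - ...; needs -1 < h < 1."""
            return sum((-1) ** (n + 1) * h ** n / n for n in range(1, terms + 1))

        print(ln_series(0.5, 10), math.log(1.5))    # ten terms, close but not exact
        print(ln_series(0.01, 3), math.log(1.01))   # three terms, already excellent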

    So most of my reference table is easy enough to make. But there’s a column left out: what is the logarithm of 2? Or 3, or 4, or so on? The infinite-series formula there doesn’t work that far out, and if you give it a try, let’s say with the logarithm of 5, you get a good bit of nonsense, numbers swinging positive and negative and ever-larger.

    Of course we’re not limited to formulas; we can think, too. 3, for example, is equal to 1.5 times 2, so the logarithm of 3 is the logarithm of 1.5 plus the logarithm of 2, and we have the logarithm of 1.5, and the logarithm of 2 is … OK, that’s a bit of a problem. But if we had the logarithm of 2, we’d be able to work out the logarithm of 4 — it’s just twice that — and we could get to other numbers pretty easily: 5 is, among other things, 2 times 2 times 1.25 so its logarithm is twice the logarithm of 2 plus the logarithm of 1.25. We’d have to work out the logarithm of 1.25, but we can do that by formula. 6 is 2 times 2 times 1.5, and we already had 1.5 worked out. 7 is 2 times 2 times 1.75, and we have a formula for the logarithm of 1.75. 8 is 2 times 2 times 2, so, triple whatever the logarithm of 2 is. 9 is 3 times 3, so, double the logarithm of 3.

    We’re not required to do things this way. I just picked some nice, easy ways to factor the whole numbers up to 9, and that didn’t seem to demand doing too much more work. I’d need the logarithms of 1.25 and 1.75, as well as 2, but I can use the formula or, for that matter, work it out using the rest of my table: 1.25 is 1.2 times 1.04 times 1.001 times 1.000602, approximately. But there are infinitely many ways to get 3 by multiplying together numbers between 1 and 2, and we can use any that are convenient.

    We do still need the logarithm of 2, but, then, 2 is among other things equal to 1.6 times 1.25, and we’d been planning to work out the logarithm of 1.6 all the time, and 1.25 is useful in getting us to 5 also, so, why not do that?
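    In Python, and again only a sketch (the series helper is redefined so the snippet stands alone):

        import math

        def ln_series(h, terms=60):
            """Partial sum of ln(1+h); needs -1 < h < 1."""
            return sum((-1) ** (n + 1) * h ** n / n for n in range(1, terms + 1))

        # 2 = 1.6 * 1.25, and both factors are close enough to 1 for the series
        ln2 = ln_series(0.6) + ln_series(0.25)
        print(ln2, math.log(2))   # about 0.693147 either way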

    So in summary we could get logarithms for any numbers we wanted by working out the logarithms for 1.1, 1.2, 1.3, and so on, and 1.01, 1.02, 1.03, et cetera, and 1.001, 1.002, 1.003 and so on, and then 1.25 and 1.75, which lets us work out the logarithms of 2, 3, 4, and so on up to 9.

    I haven’t yet worked out, but I am curious about, the fewest “extra” numbers I’d have to calculate. Granted that I have to figure out the logarithms of 1.1, 1.01, 1.001, et cetera anyway, the way I outlined things I also have to work out the logarithms of 1.25 and 1.75 to get all the numbers I need. Is it possible to figure out a cleverer bit of factorization that requires only one extra number be worked out? For that matter, is it possible to need no extra numbers? My instinctive response is to say no, but that’s hardly a proof. I’d be interested to know better.

     
  • Joseph Nebus 10:27 pm on Saturday, 6 September, 2014 Permalink | Reply

    Machines That Give You Logarithms 


    As I’ve laid out the tools that the Harvard IBM Automatic Sequence-Controlled Calculator would use to work out a common logarithm, now I can show how this computer of the 1940s and 1950s would do it. The goal, remember, is to compute logarithms to a desired accuracy, using computers that haven’t got abundant memory, and as quickly as possible. As quickly as possible means, roughly, avoiding multiplication (which takes time) and doing as few divisions as can possibly be done (divisions take forever).

    As a reminder, the tools we have are:

    1. We can work out at least some logarithms ahead of time and look them up as needed.
    2. The natural logarithm of a number close to 1 is \log_e\left(1 + h\right) = h - \frac12 h^2 + \frac13 h^3 - \frac14 h^4 + \frac15 h^5 - \cdots .
    3. If we know a number’s natural logarithm (base e), then we can get its common logarithm (base 10): multiply the natural logarithm by the common logarithm of e, which is about 0.43429.
    4. Whether for the natural or the common logarithm (or any other logarithm you might like), \log\left(a\cdot b\cdot c \cdot d \cdots \right) = \log(a) + \log(b) + \log(c) + \log(d) + \cdots .

    Now we’ll put this to work. The first step is deciding which logarithms to work out ahead of time. Since we’re dealing with common logarithms, we only need to be able to work out the logarithms for numbers between 1 and 10: the common logarithm of, say, 47.2286 is one plus the logarithm of 4.72286, and the common logarithm of 0.472286 is minus one plus the logarithm of 4.72286. So we’ll start by working out the logarithms of 1, 2, 3, 4, 5, 6, 7, 8, and 9, and storing them in what, in 1944, was still a pretty tiny block of memory. The original computer using this could store 72 numbers at a time, remember, though to 23 decimal digits.

    So let’s say we want to know the logarithm of 47.2286. We have to divide this by 10 in order to get the number 4.72286, which is between 1 and 10, so we’ll need to add one to whatever we get for the logarithm of 4.72286. (And, yes, we want to avoid doing divisions, but dividing by 10 is a special case. The Automatic Sequence-Controlled Calculator stored numbers, if I am not grossly misunderstanding things, in base ten, and so dividing or multiplying by ten was as fast for it as moving the decimal point is for us. Modern computers, using binary arithmetic, find it as fast to divide or multiply by powers of two, even though division in general is a relatively sluggish thing.)

    We haven’t worked out what the logarithm of 4.72286 is. And we don’t have a formula that’s good for that. But: 4.72286 is equal to 4 times 1.1807, and therefore the logarithm of 4.72286 is going to be the logarithm of 4 plus the logarithm of 1.1807. We worked out the logarithm of 4 ahead of time (it’s about 0.60206, if you’re curious).

    We can use the infinite series formula to get the natural logarithm of 1.1807 to as many digits as we like. The natural logarithm of 1.1807 will be about 0.1807 - \frac12 0.1807^2 + \frac13 0.1807^3 - \frac14 0.1807^4 + \frac15 0.1807^5 - \cdots or 0.16613. Multiply this by the logarithm of e (about 0.43429) and we have a common logarithm of about 0.07214. (We have an error estimate, too: we’ve got the natural logarithm of 1.1807 within a margin of error of \frac16 0.1807^6 , or about 0.000 0058, which, multiplied by the logarithm of e, corresponds to a margin of error for the common logarithm of about 0.000 0025.)

    Therefore: the logarithm of 47.2286 is about 1 plus 0.60206 plus 0.07214, which is 1.6742. And it is, too; we’ve done very well at getting the number just right considering how little work we really did.

    Although … that infinite series formula. That requires a fair number of multiplications, at least eight as I figure it, and those are sluggish. It also properly speaking requires divisions, although you could easily write your code so that instead of dividing by 4 (say) you multiply by 0.25 instead. For this particular example number of 47.2286 we didn’t need very many terms in the series to get four decimal digits of accuracy, but maybe we got lucky and some other number would have required dozens of multiplications. Can we make this process, on average, faster?

    And here’s one way to do it. Besides working out the common logarithms for the whole numbers 1 through 9, also work out the common logarithms for 1.1, 1.2, 1.3, 1.4, et cetera up to 1.9. And then …

    We started with 47.2286. Divide by 10 (a free bit of work) and we have 4.72286. And 4.72286 is 4 times 1.180715. And 1.180715 is equal to 1.1 — the whole number and the first digit past the decimal — times 1.07337. That is, 47.2286 is 10 times 4 times 1.1 times 1.07337. And so the logarithm of 47.2286 is the logarithm of 10 plus the logarithm of 4 plus the logarithm of 1.1 plus the logarithm of 1.07337. We are almost certainly going to need fewer terms in the infinite series to get the logarithm of 1.07337 than we need for 1.180715 and so, at the cost of one more division, we probably save a good number of multiplications.

    The common logarithm of 1.1 is about 0.041393. So the logarithm of 10 (1) plus the logarithm of 4 (0.60206) plus the logarithm of 1.1 (0.041393) is 1.6435, which falls a little short of the actual logarithm we’d wanted, about 1.6742, but two or three terms in the infinite series should be enough to make that up.

    Or we could work out a few more common logarithms ahead of time: those for 1.01, 1.02, 1.03, and so on up to 1.09. Our original 47.2286 divided by 10 is 4.72286. Divide that by the first digit, 4, and you get 1.180715. Divide 1.180715 by 1.1, the first two digits, and you get 1.07337. Divide 1.07337 by 1.07, the first three digits, and you get 1.003156. So 47.2286 is 10 times 4 times 1.1 times 1.07 times 1.003156. So the common logarithm of 47.2286 is the logarithm of 10 (1) plus the logarithm of 4 (0.60206) plus the logarithm of 1.1 (0.041393) plus the logarithm of 1.07 (about 0.02938) plus the logarithm of 1.003156 (to be determined). Even ignoring the to-be-determined part, that adds up to 1.6728, which is a little short of the 1.6742 we want but is doing pretty well considering we’ve reduced the whole problem to three divisions, looking stuff up, and four additions.

    If we go a tiny bit farther, and also have worked out ahead of time the logarithms for 1.001, 1.002, 1.003, and so on out to 1.009, and do the same process all over again, then we get some better accuracy and quite cheaply yet: 47.2286 divided by 10 is 4.72286. 4.72286 divided by 4 is 1.180715. 1.180715 divided by 1.1 is 1.07337. 1.07337 divided by 1.07 is 1.003156. 1.003156 divided by 1.003 is 1.0001558.

    So the logarithm of 47.2286 is the logarithm of 10 (1) plus the logarithm of 4 (0.60206) plus the logarithm of 1.1 (0.041393) plus the logarithm of 1.07 (0.029383) plus the logarithm of 1.003 (0.001301) plus the logarithm of 1.0001558 (to be determined). Leaving aside the to-be-determined part, that adds up to 1.6741.

    And the to-be-determined part is great: if we used just a single term in this series, the margin for error would be, at most, 0.000 000 0052, which is probably small enough for practical purposes. The first term in the to-be-determined part is awfully easy to calculate, too: it’s just 1.0001558 – 1, that is, 0.0001558, and multiplying that by the logarithm of e gives about 0.000 0677. Add that and we have an approximate logarithm of 1.6742, which is dead on.

    And I do mean dead on: work out more decimal places of the logarithm based on this summation and you get 1.674 205 077 226 78. That’s no more than five billionths away from the correct logarithm for the original 47.2286. And it required doing four divisions, one multiplication, and five additions. It’s difficult to picture getting such good precision with less work.

    Of course, that’s done in part by having stockpiled a lot of hard work ahead of time: we need to know the logarithms of 1, 1.1, 1.01, 1.001, and then 2, 1.2, 1.02, 1.002, and so on. That’s 36 numbers altogether and there are many ways to work out logarithms. But people have already done that work, and we can use that work to make the problems we want to do considerably easier.

    But there’s the process. Work out ahead of time logarithms for 1, 1.1, 1.01, 1.001, and so on, to whatever the limits of your patience. Then take the number whose logarithm you want and divide (or multiply) by ten until you get your working number into the range of 1 through 10. Divide out the first digit, which will be a whole number from 1 through 9. Divide out the first two digits, which will be something from 1.1 to 1.9. Divide out the first three digits, something from 1.01 to 1.09. Divide out the first four digits, something from 1.001 to 1.009. And so on. Then add up the logarithms of the power of ten you divided or multiplied by with the logarithm of the first divisor and the second divisor and third divisor and fourth divisor, until you run out of divisors. And then — if you haven’t already got the answer as accurately as you need — work out as many terms in the infinite series as you need; probably, it won’t be very many. Add that to your total. And you are, amazingly, done.
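    Here is the whole recipe as a Python sketch. The table is “worked out ahead of time” by cheating with math.log10, standing in for the hand-computed values the Automatic Sequence-Controlled Calculator would have stored; the rest follows the process above.

        import math

        # the 36-entry table: 2-9, 1.1-1.9, 1.01-1.09, 1.001-1.009
        table = {d: math.log10(d) for d in range(2, 10)}
        for k in (1, 2, 3):
            for d in range(1, 10):
                v = round(1 + d * 10 ** -k, k)
                table[v] = math.log10(v)

        LOG10_E = 0.43429448    # converts natural logarithms to common ones

        def common_log(x):
            exponent = 0        # multiplying or dividing by ten is free
            while x >= 10:
                x /= 10
                exponent += 1
            while x < 1:
                x *= 10
                exponent -= 1
            total = exponent
            first = int(x)      # divide out the leading digit ...
            if first > 1:
                total += table[first]
                x /= first
            for k in (1, 2, 3): # ... then 1.d, then 1.0d, then 1.00d
                d = int((x - 1) * 10 ** k)
                if d:
                    divisor = round(1 + d * 10 ** -k, k)
                    total += table[divisor]
                    x /= divisor
            h = x - 1           # one term of the series finishes the job
            return total + LOG10_E * h

        print(common_log(47.2286), math.log10(47.2286))   # both 1.6742050...

    On the real machine the divisions by 4, 1.1, 1.07, and 1.003 would have been the slow steps, which is why the scheme works so hard to keep the number of them down.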

     
  • Joseph Nebus 10:05 pm on Wednesday, 3 September, 2014 Permalink | Reply

    Machines That Do Something About Logarithms 


    I’m going to assume everyone reading this accepts that logarithms are worth computing, and try to describe how Harvard’s IBM Automatic Sequence-Controlled Calculator would work them out.

    The first part of this is kind of an observation: the quickest way to give the logarithm of a number is to already know it. Looking it up in a table is way faster than evaluating it, and that’s as true for the computer as for you. Obviously we can’t work out logarithms for every number, what with there being so many of them, but we could work out the logarithms for a reasonable range and to a certain precision and trust that the logarithm of (say) 4.42286 is going to be tolerably close to the logarithm of 4.423 that we worked out ahead of time. Working out a range of, say, 1 to 10 for logarithms base ten is plenty, because that’s all the range we need: the logarithm base ten of 44.2286 is the logarithm base ten of 4.42286 plus one. The logarithm base ten of 0.442286 is the logarithm base ten of 4.42286 minus one. You can guess from that what the logarithm of 4,422.86 is, compared to that of 4.42286.

    This is trading computer memory for computational speed, which is often worth doing. But the old Automatic Sequence-Controlled Calculator can’t do that, at least not as easily as we’d like: it had the ability to store 72 numbers, albeit to 23 decimal digits. We can’t just use “worked it out ahead of time”, although we’re not going to abandon that idea either.

    The next piece we have is something useful if we want to work out the natural logarithm — the logarithm base e — of a number that’s close to 1. We have a formula that will let us work out this natural logarithm to whatever accuracy we want:

    \log_{e}\left(1 + h\right) = h - \frac12 h^2 + \frac13 h^3 - \frac14 h^4 + \frac15 h^5 - \cdots \mbox{ if } |h| < 1

    In principle, we have to add up infinitely many terms to get the answer right. In practice, we only add up terms until the error — the difference between our sum and the correct answer — is smaller than some acceptable margin. This seems to beg the question, because how can we know how big that error is without knowing what the correct answer is? In fact we don’t know just what the error is, but we do know that the error can’t be any larger than the absolute value of the first term we neglect.

    Let me give an example. Suppose we want the natural logarithm of 1.5, which the alert have noticed is equal to 1 + 0.5. Then h is 0.5. If we add together the first five terms of the natural logarithm series, then we have 0.5 - \frac12 0.5^2 + \frac13 0.5^3 - \frac14 0.5^4 + \frac15 0.5^5 which is approximately 0.40729. If we were to work out the next term in the series, that would be -\frac16 0.5^6 , which has an absolute value of about 0.0026. So the natural logarithm of 1.5 is 0.40729, plus or minus 0.0026. If we only need the natural logarithm to within 0.0026, that’s good: we’re done.

    In fact, the natural logarithm of 1.5 is approximately 0.40547, so our error is closer to 0.00183, but that’s all right. Few people complain that our error is smaller than what we estimated it to be.

    If we know what margin of error we’ll tolerate, by the way, then we know how many terms we have to calculate. Suppose we want the natural logarithm of 1.5 accurate to 0.001. Then we have to find the first number n so that \frac1n 0.5^n < 0.001 ; if I'm not mistaken, that's eight. Just how many terms we have to calculate will depend on what h is; the bigger it is — the farther the number is from 1 — the more terms we'll need.
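    That bookkeeping, in a Python sketch: the five-term sum, the bound from the first neglected term, and a search for the “eight” claimed above.

        import math

        def ln_partial(h, terms):
            """First so-many terms of ln(1+h) = h - h^2/2 + h^3/3 - ..."""
            return sum((-1) ** (n + 1) * h ** n / n for n in range(1, terms + 1))

        approx = ln_partial(0.5, 5)
        bound = 0.5 ** 6 / 6                 # size of the first neglected term
        print(approx, "+/-", bound)          # about 0.40729 +/- 0.0026
        print(math.log(1.5))                 # 0.40546..., inside that margin

        n = 1                                # first n with (1/n) 0.5^n < 0.001
        while 0.5 ** n / n >= 0.001:
            n += 1
        print(n)                             # 8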

    The trouble with this is that it’s only good for working out the natural logarithms of numbers between 0 and 2. (And it’s better the closer the number is to 1.) If you want the natural logarithm of 44.2286, you have to divide out the highest power of e that’s less than it — well, you can fake that by dividing by e repeatedly — and what you get is that it’s e times e times e times 2.202 and we’re stuck there. Not hopelessly, mind you: we could find the logarithm of 1/2.202, which will be minus the logarithm of 2.202, at least, and we can work back to the original number from there. Still, this is a bit of a mess. We can do better.

    The third piece we can use is one of the fundamental properties of logarithms. This is true for any base, as long as we use the same base for each logarithm in the equation here, and I’ve mentioned it in passing before:

    \log\left(a\cdot b\cdot c\cdot d \cdots\right) = \log\left(a\right) + \log\left(b\right) + \log\left(c\right) + \log\left(d\right) + \cdots

    That is, if we could factor a number whose logarithm we want into components which we can either look up or we can calculate very quickly, then we know its logarithm is the sum of the logarithms of those components. And this, finally, is how we can work out logarithms quickly and without too much hard work.

     
    • howardat58 12:54 am on Thursday, 4 September, 2014 Permalink | Reply

      If you take the terms of the expansion of log(1+x) in pairs things are better:
      Take h^3/3-h^4/4 and get h^3*(4-3*h)/12 for example
      These are all positive for h<1 and the series will converge much more quickly.
      ps my post will soon be with you


      • Joseph Nebus 12:14 am on Friday, 5 September, 2014 Permalink | Reply

        Pairing things up looks nice, although I’m not sure it saves work. It might make for more numerically stable calculations but I haven’t tested that. (To explain: calculations on the computer naturally include a bit of error, because, basically, what we might want to write as 1/3 we write on the computer as 0.333 and that’s a tiny bit different. Sometimes doing calculations in one order will magnify that little difference between what you want and what you actually compute. Sometimes changing the order will mean those little differences stay little, and that’s a stable computation.)

        I’m looking forward to your post and appreciate the writing. Thank you.


    • mommycookforme 2:18 pm on Thursday, 4 September, 2014 Permalink | Reply

      Wow! A mathematical genius! Thanks for sharing with us; have a wonderful day! :)


  • Joseph Nebus 4:27 pm on Sunday, 24 August, 2014 Permalink | Reply

    Machines That Think About Logarithms 


    I confess that I picked up Edmund Callis Berkeley’s Giant Brains: Or Machines That Think, originally published 1949, from the library shelf as a source of cheap ironic giggles. After all, what is funnier than an attempt to explain to a popular audience that, wild as it may be to contemplate, electrically-driven machines could “remember” information and follow “programs” of instructions based on different conditions satisfied by that information? There’s a certain amount of that, though not as much as I imagined, and a good amount of descriptions of how the hardware of different partly or fully electrical computing machines of the 1940s worked.

    But a good part, and the most interesting part, of the book is about algorithms, the ways to solve complicated problems without demanding too much computing power. This is fun to read because it showcases the ingenuity and creativity required to do useful work. The need for ingenuity will never leave us — we will always want to compute things that are a little beyond our ability — but to see how it’s done for a simple problem is instructive, if for nothing else to learn the kinds of tricks you can do to get the most of your computing resources.

    The example that most struck me and which I want to share is from the chapter on the IBM Automatic Sequence-Controlled Calculator, built at Harvard at a cost of “somewhere near 3 or 4 hundred thousand dollars, if we leave out some of the cost of research and development, which would have been done whether or not this particular machine had ever been built”. It started working in April 1944, and wasn’t officially retired until 1959. It could store 72 numbers, each with 23 decimal digits. Like most computers (then and now) it could do addition and subtraction very quickly, in the then-blazing speed of about a third of a second; it could do multiplication tolerably quickly, in about six seconds; and division, rather slowly, in about fifteen seconds.

    The process I want to describe is the taking of logarithms, and why logarithms should be interesting to compute takes a little bit of justification, although it’s implicitly there just in how fast calculations get done. Logarithms let one replace the multiplication of numbers with their addition, for a considerable savings in time; better, they let you replace the division of numbers with subtraction. They further let you turn exponentiation and roots into multiplication and division, which is almost always faster to do. Many human senses seem to work on a logarithmic scale, as well: we can tell that one weight is twice as heavy as the other much more reliably than we can tell that one weight is four pounds heavier than the other, or that one light is twice as bright as the other rather than is ten lumens brighter.

    What the logarithm of a number is depends on some other, fixed, quantity, known as the base. In principle any positive number will do as base; in practice, these days people mostly only care about base e (which is a little over 2.718), the “natural” logarithm, because it has some nice analytic properties. Back in the day, which includes when this book was written, we also cared about base 10, the “common” logarithm, because we mostly work in base ten. I have heard of people who use base 2, but haven’t seen them myself and must regard them as an urban legend. The other bases are mostly used by people who are writing homework problems for the part of the class dealing with logarithms. To some extent it doesn’t matter what base you use. If you work out the logarithm in one base, you can convert that to the logarithm in another base by a multiplication.

    The logarithm of some number in your base is the exponent you have to raise the base to to get your desired number. For example, the logarithm of 100, in base 10, is going to be 2 because 10^2 is 100, and the logarithm of e^{1/3} (a touch greater than 1.3956), in base e, is going to be 1/3. To dig deeper in my reserve of in-jokes, the logarithm of 2038, in base 10, is approximately 3.3092, because 10^{3.3092} is just about 2038. The logarithm of e, in base 10, is about 0.4343, and the logarithm of 10, in base e, is about 2.303. Your calculator will verify all that.

    All that talk about “approximately” should have given you some hint of the trouble with logarithms. They’re only really easy to compute if you’re looking for whole powers of whatever your base is, and then only if your base is 10 or 2 or something else simple like that. If you’re clever and determined you can work out, say, that the logarithm of 2, base 10, has to be close to 0.3. It’s fun to do that, but it’ll involve such reasoning as “two to the tenth power is 1,024, which is very close to ten to the third power, which is 1,000, so therefore the logarithm of two to the tenth power must be about the same as the logarithm of ten to the third power”. That’s clever and fun, but it’s hardly systematic, and it doesn’t get you many digits of accuracy.
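    That estimate, in a couple of Python lines, just to see how close the trick gets:

        import math

        # 2^10 = 1024 is nearly 10^3 = 1000, so 10 log10(2) is nearly 3
        print(2 ** 10, 10 ** 3)
        print(3 / 10, math.log10(2))   # 0.3 versus 0.30103...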

    So when I pick up this thread I hope to explain one way to produce as many decimal digits of a logarithm as you could want, without asking for too much from your poor Automatic Sequence-Controlled Calculator.

     
    • Boxing Pythagoras 6:33 pm on Sunday, 24 August, 2014 Permalink | Reply

      I definitely need this book, now. Computers have become so naturally ingrained in our lives that people often forget how incredible the concept once seemed.


      • Joseph Nebus 8:40 pm on Tuesday, 26 August, 2014 Permalink | Reply

        Oh, yes, people do forget how fantastical computers were until quite recently. This book goes into a lot of the hardware, though, which I think is great. There’s something wonderful in how a piece of metal can be made to remember a fact, and it’s not really explained anymore in introductions to computers even though that’s become much more standard than memory had been in the 1940s.

        I’d picked up the book from the university library. I haven’t checked how available and expensive it is as a used book.


    • chrisbrakeshow 3:04 am on Tuesday, 26 August, 2014 Permalink | Reply

      This is some pretty heavy stuff, man. Thanks for digging deep!

      -john


    • howardat58 3:43 am on Wednesday, 27 August, 2014 Permalink | Reply

      I am a bit stuck with the text only in the comment section but here goes:
      It started with logs-like-products so I wrote the chord slope function for log as
      (log(x + x*h) – log(x))/(x*h)
      which quickly and to my surprise became
      (1/x)*log(1+h)/h
      This provided a rationale for the formal definition of log(x) as
      integral(1/t) from t=1 to t=x
      I then thought to try the standard “add up the rectangles” approach, but with unequal widths, in a geometric progression.
      So for 5 intervals and k^5=x the interval points were 1, k, k^2, k^3, k^4 and k^5 (=x)
      The sum of the rectangles came to 5*(1 – 1/k) which eventually, via the trapezoidal rule, gave the n th estimate of log(x) as

      n(root(x,n) – 1/root(x,n))

      where root(x,n) is the nth root of x.

      I tried it out with n as a power of 2, so square rooting is the only messy bit, and with n=2^10 I got log(e) = 0.9999…..
      That is 4 dp in ten steps
      Not brilliant but it works, and I have never seen anything like this before.


      • Joseph Nebus 2:22 am on Thursday, 28 August, 2014 Permalink | Reply

        I’m still working out to my satisfaction the algebra behind this (in particular the formula n(root(x, n) – 1/root(x, n)) keeps coming out about twice the natural logarithm and I’m not sure just where the two comes in, but have suspicions), but I do believe you’re right about the basic approach. I believe that it’s structurally similar to the “method of exhaustion” that the Ancient Greeks would use to work out the areas underneath a curve, long before there was a calculus to work out these problems. In any case it’s a good scheme.

        Of course, the tradeoff for this is that you need to have the n-th root of the number whose logarithm you want. This might not be too bad, especially if you decide to use an n that’s a power of two, though.


        • howardat58 5:01 pm on Thursday, 28 August, 2014 Permalink | Reply

          Transcription error !
          The divisor of 2 was scribbled so small that I missed it.
          Yes, it should be divided by 2.
          More on a separate post, the lines are getting shorter.
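          A quick numerical check of the corrected estimate, ln(x) ≈ (n/2)(root(x,n) – 1/root(x,n)), taking n a power of two so the root is just repeated square roots:

              import math

              x = math.e                   # ln(e) = 1 makes a handy test case
              r = x
              for _ in range(10):          # ten square roots give the 1024th root
                  r = math.sqrt(r)
              n = 2 ** 10
              print(n / 2 * (r - 1 / r))   # 1.00000016..., six places in ten steps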


          • Joseph Nebus 2:44 am on Friday, 29 August, 2014 Permalink | Reply

            I’m glad to have that sorted out.

            If you’d like, I’d be happy to set your work as a guest post on the main page. WordPress even allows the use of basic LaTeX commands … certainly in main posts; I’m not sure how it does in comments. (I’m afraid to try, given how hard it can be to get right the first time and that there’s no preview and editing of comments that I’ve found.)


            • howardat58 3:26 am on Friday, 29 August, 2014 Permalink | Reply

              I would like that. Thank you.
              What I use is a very simple program called mathedit, which lets you put the stuff in picture form (media), and I figured out how to modify the display size to fill the width. Have a look at my latest post to see the effect.
              If I save any pics to my blog site you can access them, put them in a post, and so on using the URL.
              I was about to start on this, with explanation, anyway, and I have figured out a way of improving the trapezium rule method dramatically.
              If you email me at howard_at_58@yahoo.co.uk I can send you stuff directly to check out.
              Gracias




  • Joseph Nebus 4:02 pm on Friday, 22 August, 2014 Permalink | Reply
    Tags: E

    Writing About E (Not By Me) 


    It’s tricky to write about e. That is, it’s not a difficult thing to write about, but it’s hard to find the audience for this number. It’s quite important, mathematically, but it hasn’t got an easy-to-understand definition like pi’s “the circumference of a circle divided by its diameter”. E’s most concise definition, I guess, is “the base of the natural logarithm”, which as an explanation to someone who hasn’t done much mathematics is only marginally more enlightening than slapping him with a slice of cold pizza. And it hasn’t got the sort of renown of something like the golden ratio which makes the number sound familiar and even welcoming.

    Still, the Mean Green Math blog (“Explaining the whys of mathematics”) has been running a series of essays explaining e by looking at different definitions of the number. The most recent of these is the twelfth in the series. They seem to be arranged in chronological order under the category of Algebra II topics and under the tag of “E” essays, although I can’t promise how long it’ll be before you have to flip through so many “older” page links on the category and tag pages that it’s harder to find them that way. If I see a master page collecting all the Definitions Of E essays into one guide, I’ll post that.

     
  • Joseph Nebus 5:26 pm on Friday, 9 August, 2013 Permalink | Reply

    Why I Don’t Believe It’s 1/e 




    The above picture, showing the Leap-the-Dips roller coaster at Lakemont Park before its renovation, kind of answers why, despite my neat reasoning and mental calculations, I don’t really believe that there’s a chance of something like one in three that any particular board from the roller coaster’s original, 1902, construction is still in place. The picture — from the end of the track, if I’m not mistaken — dates to shortly before the renovation of the roller coaster began in the late 90s. Leap-the-Dips had stood without operating, and almost certainly without maintenance, from 1986 (coinciding with the park’s acquisition by the Boyer Candy company and its temporary renaming as Boyertown USA, in miniature imitation of Hershey Park) to 1998.

    The result of this period seems almost to demand replacing every board in the thing. But we don’t know that happened, and after all, surely some boards took it better than others, didn’t they? Not every board was equally exposed to the elements, or to vandalism, or to whatever does smash up wood. And there’s a lot of pieces of wood that go into a wooden roller coaster. Surely some were lucky by virtue of being in the right spot?

    (More …)

     
  • Joseph Nebus 5:00 pm on Tuesday, 6 August, 2013 Permalink | Reply

    Why I Say 1/e About This Roller Coaster 


    The Leap-The-Dips at Lakemont Park, Altoona, Pennsylvania, as photographed by Joseph Nebus in July 2013 from the edge of the launch platform.

    So in my head I worked out an estimate of about one in three that any particular board would have remained from the Leap-The-Dips’ original, 1902, configuration, even though I didn’t really believe it. Here’s how I got that figure.

    First, you have to take a guess as to how likely it is that any board is going to be replaced in any particular stretch of time. Guessing that one percent of boards need replacing per year sounded plausible, what with how neatly a chance of one-in-a-hundred fits with our base ten numbering system, and how it’s been about a hundred years in operation. So any particular board would have about a 99 percent chance of making it through any particular year. If we suppose that the chance of a board making it through the year is independent — it doesn’t change with the board’s age, or the condition of neighboring boards, or anything but the fact that a year has passed — then the chance of any particular board lasting a hundred years is going to be 0.99^{100} . That takes a little thought to work out if you haven’t got a calculator on hand.
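    If you haven’t got the calculator handy, two lines of Python settle it, and show where 1/e sneaks in:

        import math

        print(0.99 ** 100)   # about 0.3660
        print(1 / math.e)    # about 0.3679; (1 - 1/n)**n tends to 1/e as n grows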

    (More …)

     
  • Joseph Nebus 10:22 pm on Saturday, 1 December, 2012 Permalink | Reply
    Tags: l'hopital

    Quick Little Calculus Puzzle 


    fluffy, one of my friends and regular readers, got to discussing with me a couple of limit problems, particularly ones that seemed to be solved through L’Hopital’s Rule, and then ran across some that don’t call for that tool of Freshman Calculus which you maybe remember. It’s the thing about limits of zero divided by zero, or infinity divided by infinity. (It can also be applied to a couple of other “indeterminate forms”; I remember when I took this level of calculus the teacher explaining there were seven such forms. Without looking them up, I think they’re \frac00, \frac{\infty}{\infty}, 0^0, \infty^{0}, 0^{\infty}, 1^{\infty}, \mbox{ and } \infty - \infty , but I would not recommend trusting my memory in favor of actually studying for your test.)

    Anyway, fluffy put forth two cute little puzzles that I had immediate responses for, and then started getting plagued by doubts about, so I thought I’d put them out here for people who want the recreation. They’re both about taking the limit at zero of fractions, specifically:

    \lim_{x \rightarrow 0} \frac{e^x}{x^e}

    \lim_{x \rightarrow 0} \frac{x^e}{e^x}

    where e here is the base of the natural logarithm, that is, that number just a little high of 2.71828 that mathematicians find so interesting even though it isn’t pi.

    The limit is, if you want to be exact, a subtly and carefully defined idea that took centuries of really bright work to explain. But the first really good feeling I got for it is to imagine a function evaluated at points near but not exactly at the target point — in the limits here, where x equals zero — and to see, as you keep evaluating at x very near zero, whether the values of your expression get very near something. If they do, that thing the expression gets near is probably the limit at that point.

    So, yes, you can plug in values of x like 0.1 and 0.01 and 0.0001 and so on into \frac{e^x}{x^e} and \frac{x^e}{e^x} and get a feeling for what the limit probably is. Saying what it definitely is takes a little more work.
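    Doing exactly that plugging-in, as a sketch; only from the right, since x^e isn’t a real number for negative x, which is itself a hint about the left-limits:

        import math

        for x in (0.1, 0.01, 0.001, 0.0001):
            print(x, math.exp(x) / x ** math.e, x ** math.e / math.exp(x))
        # the first ratio grows without bound; the second heads to zero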

     
    • Geoffrey Brent (@GeoffreyBrent) 11:14 pm on Saturday, 1 December, 2012 Permalink | Reply

      Substituting x=ye makes those much easier!


      • Joseph Nebus 11:55 pm on Sunday, 2 December, 2012 Permalink | Reply

        I like the way you think.

        (I might give a couple pity points if a student had no idea what to do but tossed off a gag like that.)


    • elkement 3:34 pm on Wednesday, 12 December, 2012 Permalink | Reply

      Nearly missed this gem! I am trying to expand both numerator and denominator in Taylor series (and then apply L’Hospital’s rule… or is this way too sloppy?). But I haven’t figured it out yet. Am I on the right track?

      Like

      • Joseph Nebus 4:56 am on Thursday, 13 December, 2012 Permalink | Reply

        I don’t think that you need to expand it in a Taylor series — that’s too much work. But I’d recommend looking at the right-limit and the left-limit for both expressions first, and whether they are in the indeterminate forms L’Hopital’s Rule calls for.

        Like

        • elkement 3:16 pm on Thursday, 13 December, 2012 Permalink | Reply

          Yes – thanks – of course! The first one is a plain “1/0” = infinity when approached from positive x. Re the Taylor expansion: I considered using only terms up to a certain order: if I drop anything >= O(x^2) then I end up with “1/0” again.

          Like

          • Joseph Nebus 4:39 am on Friday, 14 December, 2012 Permalink | Reply

            Yeah, that you would. Unfortunately, the Taylor expansion up to a couple of terms is fine for working out a feeling for how to do the real problem, but it doesn’t give you a proof of what the answer is, since dropping the higher-order powers of x leaves only an approximation of the expression, even near zero, which is the region of x we’re interested in.

            The left-limit is much more difficult.

            Like

  • Joseph Nebus 3:36 am on Thursday, 15 December, 2011 Permalink | Reply
    Tags: , inequality, , manipulations, reductio ad absurdum,   

    Hopefully, Saying Something True 


    I wanted to talk about drawing graphs that represent something, and to get there have to say what kinds of things I mean to represent. The quick and expected answer is that I mean to represent some kind of equation, such as “y = 3*x – 2” or “x^2 + y^2 = 4”, and that probably does come up the most often. We might also be interested in representing an inequality, something like “x^2 – 2y^2 ≤ 1”. On occasion we’re interested just in the region where something is not true, saying something like “y ≠ 3 – x”. (I’ve used nice small counting numbers here not out of any interest in these numbers, or because larger ones or non-whole numbers or even irrational numbers don’t work, but because there is something pleasantly reassuring about seeing a “1” or a “2” in an equation. We strongly believe we know what we mean by “1”.)

    Anyway, what we’ve written down is something describing a relationship which we are willing to suppose is true. We might not know what x or y are, and we might not care, but at least for the length of the problem we will suppose that the number represented by y must be equal to three times whatever number is represented by x, minus two. There might be only a single value of x we find interesting; there might be several; there might be infinitely many such values. There’ll be a corresponding number of y’s, at least, so long as the equation is true.

    Sometimes we’ll turn the description in terms of an equation into a description in terms of a graph right away. Some of these descriptions, such as those of a line — the “y = 3*x – 2” equation — or a simple shape — “x^2 + y^2 = 4” is a circle — we can turn into graphs right away without having to process them, at least once we’re familiar and comfortable with the idea of graphing. Some of these descriptions are going to be in awkward forms. “x + 2 = –y^2/x + 2y/x” is really just an awkward way to describe a circle, more or less (multiply both sides by x and complete the square, and (x+1)^2 + (y-1)^2 = 2 emerges), but that shape is hidden in the writing.
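    A quick plot makes the hidden shape plain. Here is a minimal Python sketch, using numpy and matplotlib (neither of which the post assumes), drawing the friendly line directly and the disguised circle as a zero contour:

        import numpy as np
        import matplotlib.pyplot as plt

        # The explicit description plots directly.
        x = np.linspace(-3, 3, 400)
        plt.plot(x, 3 * x - 2, label="y = 3*x - 2")

        # The awkward form, multiplied through by x and rearranged, is the
        # zero set of f(x, y) = (x + 1)^2 + (y - 1)^2 - 2: a circle.
        xx, yy = np.meshgrid(np.linspace(-3, 3, 400), np.linspace(-2, 4, 400))
        plt.contour(xx, yy, (xx + 1)**2 + (yy - 1)**2 - 2, levels=[0], colors="red")

        plt.gca().set_aspect("equal")
        plt.legend()
        plt.grid(True)
        plt.show()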

    (More …)

     
  • Joseph Nebus 4:51 am on Tuesday, 13 December, 2011 Permalink | Reply
    Tags: , geometric problem, , , inequalities, kind of graphs, light bulb   

    Before Drawing a Graph 


    I want to talk about drawing graphs, specifically, drawing curves on graphs. We know roughly what’s meant by that: it’s about wiggly shapes with a faint rectangular grid, usually in grey or maybe drawn in dotted lines, behind them. Sometimes the wiggly shapes will be in bright colors, to clarify a complicated figure or to justify printing the textbook in color. Those graphs.

    I clarify because there is a type of math called graph theory in which, yes, you might draw graphs, but there what’s meant by a graph is just any sort of group of points, called vertices, connected by lines or curves. It makes great sense as a name, but it’s not what someone who talks about drawing a graph means, until graph theory comes into consideration. Those graphs are fun, particularly because they’re insensitive to exactly where the vertices are, so you get to exercise some artistic talent instead of figuring out whatever you were trying to prove in the problem.

    The ordinary kind of graphs offer some wonderful advantages. The obvious one is that they’re pictures. People can very often understand a picture of something much faster than they can understand other sorts of descriptions. This probably doesn’t need any demonstration; if it does, try looking at a map of the boundaries of South Carolina versus reading a description of its boundaries. Some problems are much easier to work out if we can approach them as geometric problems. (And I admit feeling a particular delight when I can prove a problem geometrically; it feels cleverer.)

    (More …)

     
  • Joseph Nebus 4:27 am on Thursday, 17 November, 2011 Permalink | Reply
    Tags: , , J E D Williams, , ,   

    Descartes and the Terror of the Negative 


    When René Descartes first described the system we’ve turned into Cartesian coordinates he didn’t put it forth in quite the way we build them these days. This shouldn’t be too surprising; he lived about four centuries ago, and we have experience with the idea of matching every point on the plane to some ordered pair of numbers that he couldn’t have. The idea has been expanded on, and improved, and logical rigor I only pretend to understand laid underneath the concept. But the core remains: we put somewhere on our surface an origin point — usually this gets labelled O, mnemonic for “origin” and also suggesting the zeroes which fill its coordinates — and we pick some direction to be the x-coordinate and some direction to be the y-coordinate, and the ordered pair for a point says how far in the x-direction and how far in the y-direction one must go from the origin to get there.

    The most obvious difference between Cartesian coordinates as Descartes set them up and Cartesian coordinates as we use them is that Descartes would divide the plane into four parts, one for each quadrant. The first quadrant is the points to the right of and above the origin. The second quadrant is to the left of and still above the origin. The third quadrant is to the left of and below the origin, and the fourth is to the right of the origin but below it. This division of the plane into quadrants, and even their identification as quadrants I, II, III, and IV respectively, still exists, one of those minor points on which prealgebra and algebra students briefly trip on their way to tripping over the trigonometric identities.
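    As a tiny sketch of that convention, here is a Python function of my own devising (nothing from the post itself, and the handling of points on an axis is one arbitrary choice among several):

        def quadrant(x, y):
            # Name the quadrant of a point under the I-IV convention:
            # I right-and-above, II left-and-above, III left-and-below,
            # IV right-and-below. Points on an axis belong to no quadrant.
            if x > 0 and y > 0:
                return "I"
            if x < 0 and y > 0:
                return "II"
            if x < 0 and y < 0:
                return "III"
            if x > 0 and y < 0:
                return "IV"
            return "on an axis"

        print(quadrant(3, 2))    # I
        print(quadrant(-3, -2))  # III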

    Descartes had, from his perspective, excellent reason to divide the plane up this way. It’s a reason difficult to imagine today. By separating the plane like this he avoided dealing with something mathematicians of the day were still uncomfortable with. It’s easy enough to describe a point in the first quadrant as being so far to the right and so far above the origin. But a point in the second quadrant is … not any distance to the right. It’s to the left. How far to the right is something that’s to the left?

    (More …)

     
    • Donna 6:55 pm on Friday, 18 November, 2011 Permalink | Reply

      Just like double entry bookkeeping… ;)

      Like

      • nebusresearch 7:21 am on Sunday, 20 November, 2011 Permalink | Reply

        Yeah, a lot like. I wonder if anyone’s written a history of mathematics that takes it from the accountants’ perspective; a lot of driving forces did come from wanting to make accounting better. (And the idea of a negative number as a debt or obligation did a lot to make negative numbers make sense.)

        Like

  • Joseph Nebus 5:30 am on Tuesday, 15 November, 2011 Permalink | Reply
    Tags: , , n-tuple, ordered pairs, , Russell Shorto,   

    Descartes’ Flies 


    There are a healthy number of legends about René Descartes. Some of them may be true. I know the one I like is the story that this superlative mathematician, philosopher, and theologian (fields not so sharply differentiated in his time as they are today; for that matter, fields still not perfectly sharply differentiated) was so insistent on sleeping late and sufficiently ingenious in forming arguments that while a student at the Jesuit Collège Royal Henry-Le-Grand he convinced his schoolmasters to let him sleep until 11 am. Supposedly he kept to this rather civilized rising hour until the last months of his life, when he needed to tutor Queen Christina of Sweden in the earliest hours of the winter morning.

    I suppose this may be true; it’s certainly repeated often enough, and comes to mind often when I do have to wake to the alarm clock. I haven’t studied Descartes’ biography well enough to know whether to believe it, although as it makes for a charming and humanizing touch probably the whole idea is bunk and we’re fools to believe it. I’m comfortable being a little foolish. (I’ve read just the one book which might be described as even loosely biographic of Descartes — Russell Shorto’s Descartes’ Bones — and so, though I have no particular reason to doubt Shorto’s research and no question with his narrative style, suppose I am marginally worse-informed than if I were completely ignorant. It takes a cluster of books on a subject to know it.)

    Place the name “Descartes” into the conversation and a few things pop immediately into mind. Those things are mostly “I think, therefore I am”, and some attempts to compose a joke about being “before the horse”. Running up sometime after that is something called “Cartesian coordinates”, which are about the most famous kind of coordinates and the easiest way to get into the problem of describing just where something is in two- or three-dimensional space.

    (More …)

     
    • BunnyHugger 5:45 pm on Thursday, 17 November, 2011 Permalink | Reply

      For what this is worth, I have seen the story about Descartes being a late sleeper in a variety of reliable sources, and have not seen anything suggesting it is apocryphal. I can’t point to a specific source, but this isn’t one of those “stories about philosophers” that those of us in the business all regard as highly dubious (e.g. housewives setting their clocks by Kant’s walks, Hume forced to disclaim atheism in order to be pulled out of a mud puddle, Thales falling down a well).

      Like

      • nebusresearch 7:19 am on Sunday, 20 November, 2011 Permalink | Reply

        I have seen the tale of Descartes staying in bed late — not exactly the same thing as sleeping late; at least sometimes it says he stayed in bed because he thought well while staring up at the ceiling — in quite a few sources and don’t have specific reason to doubt any of them. But I haven’t seen the firsthand sources for it either (I admit I haven’t looked, in this case), and I’m getting more demanding about that these days.

        Like

  • Joseph Nebus 4:21 am on Tuesday, 8 November, 2011 Permalink | Reply
    Tags: , , , , yarn   

    Can A Ball Of Yarn Threaten Three-Dimensional Space? 


    All that talk about numbering spots on the New York Thruway had a goal, that of establishing how we could set up a coordinate system for the points on a line. It turns out to be just as easy to do this for a curve, even one a little bit complicated like a branch of the Thruway. About the only constraint we said anything about was that we shouldn’t have branches. Lurking unstated was the idea that we didn’t have loops. For the Thruway that’s nothing exceptional; if we had a traffic circle in the middle of a high-speed limited-access highway we wouldn’t have a high-speed highway for very long. Worse, we’d have some point — where the loop crosses itself — that would have two numbers describing its position. We don’t want to face that. But we’ve got this satisfying little system where we can assign unique numbers to all the points on a single line, or even a curve.

    The natural follow-up idea is whether we can set up a system where we can describe a point on a surface or even in all of space using the same sort of coordinates scheme. And there’s the obvious answer of how to do it, using two numbers to describe where something is on a surface, since that’s a two-dimensional thing; or three numbers to describe where it is in space, since that’s a three-dimensional thing. So I’m not going to talk about that just now. I want to do something more fun, the kind of thing that could do nicely in late-night conversations in the dorm lounge if undergraduates still have late-night conversations in the dorm lounge.

    If we have a long enough thread, or a strand of yarn, or whatever the quite correct term is, we know this can be set up with a coordinate system by marking off distance along that thread. We imagined doing that, more or less, with the numbering system on the Thruway, imagining the straightening out and curving and other moving around of the highway’s center line. As long as we didn’t stretch or compress the strand any, we could spread it out in any shape we liked, and have coordinates for whatever path the strand traces out.
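    To make that marking-off concrete, here is a minimal Python sketch; the function, the polyline idealization, and the sample strand are all my own inventions for illustration:

        import math

        def point_at_distance(path, s):
            # Walk segment by segment along a polyline, spending distance s,
            # and return the point where it runs out: the "mark off distance
            # along the thread" coordinate made literal.
            for (x0, y0), (x1, y1) in zip(path, path[1:]):
                seg = math.hypot(x1 - x0, y1 - y0)
                if s <= seg:
                    t = s / seg
                    return (x0 + t * (x1 - x0), y0 + t * (y1 - y0))
                s -= seg
            raise ValueError("distance exceeds the length of the strand")

        # The same strand bent into another shape keeps the same coordinates.
        bent = [(0, 0), (3, 4), (3, 10)]
        print(point_at_distance(bent, 2.5))   # (1.5, 2.0), midway up the first leg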

    (More …)

     
  • Joseph Nebus 3:47 am on Thursday, 3 November, 2011 Permalink | Reply
    Tags: , , , ,   

    Searching For Infinity On The New York Thruway 


    So with several examples I’ve managed to prove what nobody really questioned, that it’s possible to imagine a complicated curve like the route of the New York Thruway and assign to all the points on it, or at least to the center line of the road, a unique number that no other point on the road has. And, more, it’s possible to assign these unique numbers in many different ways, from any lower bound we like to any upper bound we like. It’s a nice system, particularly if we’re short on numbers to tell us when we approach Loudonville.

    But I’m feeling ambitious right now and want to see how ridiculously huge, positive or negative, a number I can assign to some point on the road. Since we’d measured distances from a reference point by miles before and got a range of about 500, or by millimeters and got a range of about 800,000,000, obviously we could get to any number, however big or small, just by measuring distance using the appropriate unit: lay megaparsecs or angstroms down on the Thruway, or even use some awkward or contrived units. I want to shoot for infinitely big numbers. I’ll start by dividing the road in two.

    After all, there are two halves to the Thruway, a northern and a southern half, both arranged like upside-down u’s across the state. Instead of thinking of the center line of the whole Thruway, then, think of the center lines of the northern road and of the southern. They’re both about the same 496-mile length, but it’d be remarkable if they were exactly the same length. Let’s suppose the northern belt is 497 miles, and the southern 495. Pretty naturally the northern belt we can give numbers from 0 to 497, based on how far each point is from the south-eastern end of the road; similarly, the southern belt gets numbers from 0 to 495, from the same reference point.
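    For a hedged sketch of how a finite road can hand out arbitrarily huge coordinates: any increasing relabeling of [0, length) onto [0, infinity) will do the job, and the tangent below is my own convenient choice, not necessarily the construction the post builds toward.

        import math

        def unbounded_coordinate(d, length):
            # Relabel the point d miles along a road of the given length
            # with a number that runs from 0 toward infinity as d nears
            # the far end. Any increasing map onto [0, infinity) would do.
            if not 0 <= d < length:
                raise ValueError("d must lie on the road, short of the far end")
            return math.tan((math.pi / 2) * (d / length))

        for d in (0, 248.5, 490, 496.9):
            print(f"mile {d}: coordinate {unbounded_coordinate(d, 497):,.2f}")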

    (More …)

     
  • Joseph Nebus 3:30 am on Thursday, 20 October, 2011 Permalink | Reply
    Tags: , , ,   

    Searching For 800,000,000 On The New York Thruway 


    So we’ve introduced, with maybe more words than strictly necessary, the idea that we can set up a match between the numbers from 0 to 496 and particular locations on the New York Thruway. There are a number of practical quibbles that can be brought against this scheme. For example: could we say for certain that the “outer” edge of this road, which has roughly the shape of an upside-down u, isn’t longer than the “inner” edge? We may need more numbers for the one side than the other. And the mile markers, which seemed like an acceptable scheme for noting where one was, are almost certainly only approximately located.

    But these aren’t very important. We can imagine the existence of the “ideal” Thruway, some line which runs along the median of the whole extent of the highway, so there’s no difference in length running either direction, and we can imagine measuring it to arbitrarily great precision. The actual road approximates that idealized road. And this gives what I had really wanted, a kind of number line. All the numbers from zero to 496 (or so) match a point on this ideal Thruway line, and all the points on this Thruway match some number between zero and 496. That the line wriggles all over the place and changes direction over and over, well, do we really insist that a line has to be straight?

    Well, we can at least imagine taking this “ideal” Thruway, lifting it off the globe and straightening it out, if we really want to. Here we invoke a host of assumptions even past the idea that we can move this curvy idealized road around. We assume that we can straighten it out without changing its length, for example. This isn’t too unreasonable if we imagine this curve as being something like a tangled bit of string and that we straighten it out without putting any particular tension on it; but if we imagined the idealized road as being a rubber band, held taut at the New York City and Ripley, New York, ends and pinned in place at the major turns, we notice that isn’t actually guaranteed. Let’s assume we can do this straightening-out without distorting the lengths, though.

    (More …)

     
  • Joseph Nebus 2:18 am on Tuesday, 18 October, 2011 Permalink | Reply
    Tags: , New Jersey Turnpike, , , , toll booth,   

    Searching For e On The New York Thruway 


    To return to my introduction of e using the most roundabout method possible I’d like to imagine the problem of telling someone just where it is you’ve been stranded in a broken car on the New York Thruway. Actually, I’d rather imagine the problem of being stranded in a broken car on the New Jersey Turnpike, as it’s much closer to my home, but the Turnpike has a complexity I don’t want distracting this chat, so I place the action one state north. Either road will do.

    There’s too much toll road to just tell someone to find you there, and the majority of their lengths are away from any distinctive scenery, like an airport or a rest area, which would pin a location down. A gradual turn with trees on both sides is hardly distinctive. What’s needed is some fixed reference point. Fortunately, the Thruway Authority has been generous and provided more than sixty of them. These are the toll plazas: if we report that we are somewhere between exits 23 and 24, we have narrowed down our location to a six-mile stretch, which over a 496-mile road is not doing badly. We can imagine having our contact search that.

    But the toll booth standard has many inconveniences. The biggest is that exits are not uniformly spaced. At the New York City end of the Thruway, before tolls start, exits can be under a mile apart; upstate, where major centers of population become sparse, they can spread out to nearly twenty miles apart. As we wait for rescue those twenty miles seem to get longer.
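    As a little sketch of searching by reference points, in Python, with made-up mileposts rather than the Thruway’s real ones:

        import bisect

        exit_mileposts = [0.0, 15.2, 32.7, 48.1, 71.9, 90.4]  # hypothetical

        def locate(position):
            # Find which pair of exits brackets the stranded car's milepost.
            i = bisect.bisect_right(exit_mileposts, position)
            if i == 0 or i == len(exit_mileposts):
                return "off the tolled stretch"
            stretch = exit_mileposts[i] - exit_mileposts[i - 1]
            return f"between exits {i} and {i + 1}, a {stretch:.1f}-mile stretch"

        print(locate(40.0))   # between exits 3 and 4, a 15.4-mile stretch

    The unevenness of the stretches is exactly the inconvenience complained about above: the answer narrows you to an interval, and the intervals vary wildly in size.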

    (More …)

     
    • BunnyHugger 4:32 pm on Tuesday, 18 October, 2011 Permalink | Reply

      I admit I kept thinking to myself, “What, don’t they have mile markers on the Thruway?” but then you got to that at the end.

      Like

      • nebusresearch 10:13 am on Wednesday, 19 October, 2011 Permalink | Reply

        Yes, part of the trouble of this topic is I had to get through the subject before people protested too strongly that there were mile markers the whole length already. That’s probably a good trouble to have, though. It encourages brevity.

        Like

    • Jim Ellwanger (@trainman74) 6:04 pm on Wednesday, 19 October, 2011 Permalink | Reply

      Also, in most states (per Federal recommendation), the exits are numbered to match the nearest milepost anyway.

      Like

      • nebusresearch 3:35 am on Thursday, 20 October, 2011 Permalink | Reply

        Yes, that’s why I had to rule out the Garden State Parkway from things, although the non-uniformity of spacing still turns up. Exits can be one number apart and actually be a good bit more or less than a mile apart, after all. The Thruway and Turnpike just make the case more obviously.

        Like

  • Joseph Nebus 2:19 am on Thursday, 13 October, 2011 Permalink | Reply
    Tags: , Karl Friedrich Gauss, ,   

    The Person Who Named e 


    One of the personality traits which my Dearly Beloved most often tolerates in me is my tendency toward hyperbole, a rhetorical device employed successfully on the Internet by almost four people and recognized as such as recently as 1998. I’m not satisfied saying there was an enormous, slow-moving line for a roller coaster we rode last August; I have to say that fourteen months later we’re still on that line.

    I mention this because I need to discuss one of those rare people who can be discussed accurately only in hyperbole: Leonhard Euler, 1707 – 1783. He wrote about essentially every field of mathematics it was possible to write about: calculus and geometry and physics and algebra and number theory and graph theory and logic, on music and the motions of the moon, on optics and the finding of longitude, on fluid dynamics and the frequency of prime numbers. After his death the Saint Petersburg Academy needed nearly fifty years to finish publishing his remaining work. If you ever need to fake being a mathematician, let someone else introduce the topic and then speak of how Euler’s Theorem is fundamental to it. There are several thousand Euler’s Theorems, although some of them share billing with another worthy, and most of them are fundamental to at least sixteen fields of mathematics each. I exaggerate; I must, but I note that a search for “Euler” on Wolfram Mathworld turns up 681 matches, as of this moment, out of 13,081 entries. It’s difficult to imagine other names taking up more than five percent of known mathematics. Even Karl Friedrich Gauss only matches 272 entries, and Isaac Newton a paltry 138.

    (More …)

     
    • MJ Howard (@random_bunny) 2:41 am on Thursday, 13 October, 2011 Permalink | Reply

      I was actually thinking a bit about e earlier today when I read http://www.qwantz.com/index.php?comic=2061 this morning and started thinking about using Reals as a numeric base and whether it had any use and then realized that we already sort of do with e.

      Also I checked mathworld and Erdös gets 223 results. It’s no Euler, but it’s nothing to sneeze at.

      Like

      • nebusresearch 3:45 am on Sunday, 16 October, 2011 Permalink | Reply

        I didn’t think to look up Erdös, but should have. Now I’m tempted to make a list of great names and see how much they do come up on Mathworld and whatever similar sites I can locate.

        Like
