Tagged: differential equations

  • Joseph Nebus 6:00 pm on Thursday, 18 May, 2017 Permalink | Reply
    Tags: differential equations

    Everything Interesting There Is To Say About Springs 


    I need another supplemental essay to get to the next part in Why Stuff Can Orbit. (Here’s the last part.) You probably guessed it’s about springs. They’re useful to know about. Why? That one killer Mystery Science Theater 3000 short, yes. But also because they turn up everywhere.

    Not because there are literally springs in everything. Not with the rise in anti-spring political forces. But what makes a spring is a force that pushes something back where it came from. It pushes with a force that grows just as fast as the distance from where it came grows. Most anything that’s stable, that has some normal state which it tends to look like, acts like this. A small nudging away from the normal state gets met with some resistance. A bigger nudge meets bigger resistance. And most stuff that we see is stable. If it weren’t stable it would have broken before we got there.

    (There are exceptions. Stable is, sometimes, about perspective. It can be that something is unstable but it takes so long to break that we don’t have to worry about it. Uranium, for example, is dying, turning slowly into stable elements like lead and helium. There will come a day there’s none left in the Earth. But it takes so long to break down that, barring surprises, the Earth will have broken down into something else first. And it may be that something is unstable, but it’s created by something that’s always going on. Oxygen in the atmosphere is always busy combining with other chemicals. But oxygen stays in the atmosphere because life keeps breaking it out of other chemicals.)

    Now I need to put in some terms. Start with your thing. It’s on a spring, literally or metaphorically. Don’t care. If it isn’t being pushed in any direction then it’s at rest. Or it’s at an equilibrium. I don’t want to call this the ideal or natural state. That suggests some moral superiority to one way of existing over another, and how do I know what’s right for your thing? I can tell you what it acts like. It’s your business whether it should. Anyway, your thing has an equilibrium.

    Next term is the displacement. It’s how far your thing is from the equilibrium. If it’s really a block of wood on a spring, like it is in high school physics, this displacement is how far the spring is stretched out. In equations I’ll represent this as ‘x’ because I’m not going to go looking deep for letters for something like this. What value ‘x’ has will change with time. This is what makes it a physics problem. If we want to make clear that ‘x’ does depend on time we might write ‘x(t)’. We might go all the way and start at the top of the page with ‘x = x(t)’, just in case.

    If ‘x’ is a positive number it means your thing is displaced in one direction. If ‘x’ is a negative number it was displaced in the opposite direction. By ‘one direction’ I mean ‘to the right, or else up’. By ‘the opposite direction’ I mean ‘to the left, or else down’. Yes, you can pick any direction you like but why are you making life harder for everyone? Unless there’s something compelling about the setup of your thing that makes another choice make sense just go along with what everyone else is doing. Apply your creativity and iconoclasm where it’ll make your life better instead.

    Also, we only have to worry about one direction. This might surprise you. If you’ve played much with springs you might have noticed how they’re three-dimensional objects. You can set stuff swinging back and forth in two directions at once. That’s all right. We can describe a two-dimensional displacement as a displacement in one direction plus a displacement perpendicular to that. And if there’s no such thing as friction, they won’t interact. We can pretend they’re two problems that happen to be running on the same spring at the same time. So here I declare: we can ignore friction and pretend it doesn’t matter. We don’t have to deal with more than one direction at a time.

    (It’s not only friction. There’s problems about how energy gets transmitted between ways the thing can oscillate. This is what causes what starts out as a big whack in one direction to turn into a middling little circular wobbling. That’s a higher level physics than I want to do right now. So here I declare: we can ignore that and pretend it doesn’t matter.)

    Whether your thing is displaced or not it’s got some potential energy. This can be as large or as small as you like, down to some minimum when your thing is at equilibrium. The potential energy we represent as a number named ‘U’ because of good reasons that somebody surely had. The potential energy of a spring depends on the square of the displacement. We can write its value as ‘U = ½ k x²’. Here ‘k’ is a number known as the spring constant. It describes how strongly the spring reacts; the bigger ‘k’ is, the more any displacement’s met with a contrary force. It’ll be a positive number. ½ is that same old one-half that you know from ideas being half-baked or going-off being half-cocked.

    Potential energy is great. If you can describe a physics problem with its energy you’re in good shape. It lets us bring physical intuition into understanding things. Imagine a bowl or a Habitrail-type ramp that’s got the cross-section of your potential energy. Drop a little marble into it. How the marble rolls? That’s what your thingy does in that potential energy.

    Also we have mathematics. Calculus, particularly differential equations, lets us work out how the position of your thing will change. We need one more piece for this. That’s the momentum of your thing. Momentum is traditionally represented with the letter ‘p’. And now here’s how stuff moves when you know the potential energy ‘U’:

    \frac{dp}{dt} = - \frac{\partial U}{\partial x}

    Let me unpack that. \frac{dp}{dt} — also known as \frac{d}{dt}p if that looks better — is “the derivative of p with respect to t”. It means “how the value of the momentum changes as the time changes”. And that is equal to minus one times …

    You might guess that \frac{\partial U}{\partial x} — also written as \frac{\partial}{\partial x} U — is some kind of derivative. The \partial looks kind of like a cursive d, after all. It’s known as the partial derivative, because it means we look at how ‘U’ changes as ‘x’ and nothing else at all changes. With the normal, ‘d’ style full derivative, we have to track how all the variables change as the ‘t’ we’re interested in changes. In this particular problem the difference doesn’t matter. But there are problems where it does matter and that’s why I’m careful about the symbols.

    So now we fall back on how to take derivatives. This gives us the equation that describes how the physics of your thing on a spring works:

    \frac{dp}{dt} = - k x
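
    (A quick aside: if you don’t want to take my word for that derivative, a computer algebra system will check it. Here’s a minimal sketch in Python with sympy; the variable names are mine, nothing standard.)

        import sympy as sp

        x = sp.Symbol('x', real=True)
        k = sp.Symbol('k', positive=True)

        U = sp.Rational(1, 2) * k * x**2   # the spring's potential energy
        print(-sp.diff(U, x))              # the right-hand side of dp/dt: prints -k*x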

    You’re maybe underwhelmed. This is because we haven’t got any idea how the momentum ‘p’ relates to the displacement ‘x’. Well, we do, because I know and if you’re still reading at this point you know full well what momentum is. But let me make it official. Momentum is, for this kind of thing, the mass ‘m’ of your thing times how its position is changing, which is \frac{dx}{dt} . The mass of your thing isn’t changing. If you’re going to let it change then we’re doing some screwy rocket problem and that’s a different article. So it’s easy to get the momentum out of that problem. We get instead the second derivative of the displacement with respect to time:

    m\frac{d^2 x}{dt^2} = - kx
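
    We’ll solve this exactly in a bit. But it’s worth knowing a computer can already chew on it numerically. Here’s a minimal sketch using scipy, with made-up numbers for the mass and spring constant:

        import numpy as np
        from scipy.integrate import solve_ivp

        m, k = 1.0, 4.0                      # made-up mass and spring constant

        def spring(t, state):
            x, p = state                     # displacement and momentum
            return [p / m, -k * x]           # dx/dt = p/m and dp/dt = -k x

        sol = solve_ivp(spring, (0.0, 10.0), [1.0, 0.0])
        print(sol.y[0, -1])                  # the displacement at t = 10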

    Fine, then. Does that tell us anything about what ‘x(t)’ is? Not yet, but I will now share with you one of the top secrets that only real mathematicians know. We will take a guess to what the answer probably is. Then we’ll see in what circumstances that answer could possibly be right. Does this seem ad hoc? Fine, so it’s ad hoc. Here is the secret of mathematicians:

    It’s fine if you get your answer by any stupid method you like, including guessing and getting lucky, as long as you check that your answer is right.

    Oh, sure, we’d rather you get an answer systematically, since a system might give us ideas how to find answers in new problems. But if all we want is an answer then, by definition, we don’t care where it came from. Anyway, we’re making a particular guess, one that’s very good for this sort of problem. Indeed, this guess is our system. A lot of guesses at solving differential equations use exactly this guess. Are you ready for my guess about what solves this? Because here it is.

    We should expect that

    x(t) = C e^{r t}

    Here ‘C’ is some constant number, not yet known. And ‘r’ is some constant number, not yet known. ‘t’ is time. ‘e’ is that number 2.71828(etc) that always turns up in these problems. Why? Because its derivative is very easy to take, and if we have to take derivatives we want them to be easy to take. The first derivative of Ce^{rt} with respect to ‘t’ is r Ce^{rt} . The second derivative with respect to ‘t’ is r^2 Ce^{rt} . So here’s what we have:

    m r^2 Ce^{rt} = - k Ce^{rt}

    What we’d like to find are the values for ‘C’ and ‘r’ that make this equation true. It’s got to be true for every value of ‘t’, yes. But this is actually an easy equation to solve. Why? Because the C e^{rt} on the left side has to equal the C e^{rt} on the right side. As long as they’re not equal to zero and hey, what do you know? C e^{rt} can’t be zero unless ‘C’ is zero. So as long as ‘C’ is any number at all in the world except zero we can divide this ugly lump of symbols out of both sides. (If ‘C’ is zero, then this equation is 0 = 0 which is true enough, I guess.) What’s left?

    m r^2 = -k
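
    If you’d rather have sympy do the algebra, it finds the same two roots we’re about to work out:

        import sympy as sp

        r = sp.Symbol('r')
        m, k = sp.symbols('m k', positive=True)

        print(sp.solve(sp.Eq(m * r**2, -k), r))
        # two roots, -I*sqrt(k/m) and I*sqrt(k/m), with I being sympy's imaginary unit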

    OK, so, we have no idea what ‘C’ is and we’re not going to have any. That’s all right. We’ll get it later. What we can get is ‘r’. You’ve probably got there already. There’s two possible answers:

    r = \pm\sqrt{-\frac{k}{m}}

    You might not like that. You remember that ‘k’ has to be positive, and if mass ‘m’ isn’t positive something’s screwed up. So what are we doing with the square root of a negative number? Yes, we’re getting imaginary numbers. Two imaginary numbers, in fact:

    r = \imath \sqrt{\frac{k}{m}}, r = - \imath \sqrt{\frac{k}{m}}

    Which is right? Both. In some combination, too. It’ll be a bit with that first ‘r’ plus a bit with that second ‘r’. In the differential equations trade this is called superposition. We’ll have information that tells us how much uses the first ‘r’ and how much uses the second.

    You might still be upset. Hey, we’ve got these imaginary numbers here describing how a spring moves and while you might not be one of those high-price physicists you see all over the media you know springs aren’t imaginary. I’ve got a couple responses to that. Some are semantic. We only call these numbers “imaginary” because when we first noticed they were useful things we didn’t know what to make of them. The label is an arbitrary thing that doesn’t make any demands of the numbers. If we had called them, oh, “Cardanic numbers” instead would you be upset that you didn’t see any Cardanos in your springs?

    My high-class semantic response is to ask in exactly what way is the “square root of minus one” any less imaginary than “three”? Can you give me a handful of three? No? Didn’t think so.

    And then the practical response is: don’t worry. Exponentials raised to imaginary numbers do something amazing. They turn into sine waves. Well, sine and cosine waves. I’ll spare you just why. You can find it by looking at the first twelve or so posts of any pop mathematics blog and its article about how amazing Euler’s Formula is. Given that Euler published, like, 2,038 books and papers through his life and the fifty years after his death it took to clear the backlog you might think, “Euler had a lot of Formulas, right? Identities too?” Yes, he did, but you’ll know this one when you see it.

    What’s important is that the displacement of your thing on a spring will be described by a function which looks like this:

    x(t) = C_1 e^{\imath \sqrt{\frac{k}{m}} t} + C_2 e^{-\imath \sqrt{\frac{k}{m}} t}

    for two constants, ‘C₁’ and ‘C₂’. These were the things we called ‘C’ back when we thought the answer might be Ce^{rt} ; there’s two of them because there’s two r’s. I give you my word this is equivalent to a formula like this, but you can make me show my work if you must:

    x(t) = A \cos\left(\sqrt{\frac{k}{m}} t\right) + B \sin\left(\sqrt{\frac{k}{m}} t\right)

    for some (other) constants ‘A’ and ‘B’. Cosine and sine are the old things you remember from learning about cosine and sine.
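
    Here’s the showing-my-work, outsourced to sympy. The exponential form, rewritten with Euler’s Formula, matches the sine-and-cosine form if you take A = C₁ + C₂ and B = i times (C₁ - C₂):

        import sympy as sp

        t, w = sp.symbols('t omega', positive=True)   # w standing in for sqrt(k/m)
        C1, C2 = sp.symbols('C1 C2')

        exp_form = C1 * sp.exp(sp.I * w * t) + C2 * sp.exp(-sp.I * w * t)
        trig_form = (C1 + C2) * sp.cos(w * t) + sp.I * (C1 - C2) * sp.sin(w * t)

        print(sp.simplify(exp_form.rewrite(sp.cos) - trig_form))   # prints 0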

    OK, but what are ‘A’ and ‘B’?

    Generically? We don’t care. Some numbers. Maybe zero. Maybe not. The pattern, how the displacement changes over time, will be the same whatever they are. It’ll be regular oscillation. At one time your thing will be as far from the equilibrium as it gets, and not moving toward or away from the center. At one time it’ll be back at the center and moving as fast as it can. At another time it’ll be as far away from the equilibrium as it gets, but on the other side. At another time it’ll be back at the equilibrium and moving as fast as it ever does, but the other way. How far is that maximum? What’s the fastest it travels?

    The answer’s in how we started. If we start at the equilibrium without any kind of movement we’re never going to leave the equilibrium. We have to get nudged out of it. But what kind of nudge? There’s three ways you can nudge something out.

    You can tug it out some and let it go from rest. This is the easiest: then ‘A’ is however big your tug was and ‘B’ is zero.

    You can let it start from equilibrium but give it a good whack so it’s moving at some initial velocity. This is the next-easiest: ‘A’ is zero, and ‘B’ is … no, not the initial velocity. You need to look at what the velocity of your thing is at the start. That’s the first derivative:

    \frac{dx}{dt} = -\sqrt{\frac{k}{m}} A \sin\left(\sqrt{\frac{k}{m}} t\right) + \sqrt{\frac{k}{m}} B \cos\left(\sqrt{\frac{k}{m}} t\right)

    The start is when time is zero because we don’t need to be difficult. When ‘t’ is zero the above velocity is \sqrt{\frac{k}{m}} B . So that product has to be the initial velocity. That’s not much harder.
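
    If you’d like those two easy cases as something you can run, here’s a sketch in Python. The starting numbers are invented for the example:

        import numpy as np

        m, k = 1.0, 4.0
        omega = np.sqrt(k / m)        # shorthand for the sqrt(k/m) inside the sine and cosine

        # Case one: tugged out to x0 and released from rest.
        x0 = 0.5
        A, B = x0, 0.0

        # Case two: starts at equilibrium, whacked to initial velocity v0.
        v0 = 2.0
        A, B = 0.0, v0 / omega        # since the velocity at t = 0 is omega * B

        print(A, B)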

    The third case is when you start with some displacement and some velocity. A combination of the two. Then, ugh. You have to figure out ‘A’ and ‘B’ that make both the position and the velocity work out. That’s solving simultaneous equations, and not even hard equations. It’s more work is all. I’m interested in other stuff anyway.

    Because, yeah, the spring is going to wobble back and forth. What I’d like to know is how long it takes to get back where it started. How long does a cycle take? Look back at that position function, for example. That’s all we need.

    x(t) = A \cos\left(\sqrt{\frac{k}{m}} t\right) + B \sin\left(\sqrt{\frac{k}{m}} t\right)

    Sine and cosine functions are periodic. They have a period of 2π. This means if you take the thing inside the parentheses after a sine or a cosine and increase it — or decrease it — by 2π, you’ll get the same value out. What’s the first time that the displacement and the velocity will be the same as their starting values? If they started at t = 0, then, they’re going to be back there at a time ‘T’ which makes true the equation

    \sqrt{\frac{k}{m}} T = 2\pi

    And that’s going to be

    T = 2\pi\sqrt{\frac{m}{k}}
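
    You can check the period numerically, if you don’t trust the formula. Integrate the motion for exactly one predicted period and see the thing come back to where it started. The mass and spring constant here are arbitrary:

        import numpy as np
        from scipy.integrate import solve_ivp

        m, k = 2.0, 5.0
        T = 2 * np.pi * np.sqrt(m / k)           # the predicted period

        def spring(t, state):
            x, v = state
            return [v, -(k / m) * x]

        sol = solve_ivp(spring, (0.0, T), [1.0, 0.0], rtol=1e-10, atol=1e-10)
        print(sol.y[:, -1])                      # very close to [1, 0], the starting state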

    A maybe-surprising thing about this: the period doesn’t depend at all on how big the displacement is. That’s true for perfect springs, which don’t exist in the real world. You knew that. Imagine taking a Junior Slinky from the dollar store and sticking a block of something on one end. Imagine stretching it out to 500,000 times the distance between the Earth and Jupiter and letting go. Would it act like a spring or would it break? Yeah, we know. It’s sad. Think of the animated-cartoon joy a spring like that would produce.

    But this period not depending on the displacement is true for small enough displacements, in the real world. Or for good enough springs. Or things that work enough like springs. By “true” I mean “close enough to true”. We can give that a precise mathematical definition, which turns out to be what you would mean by “close enough” in everyday English. The difference is it’ll have Greek letters included.

    So to sum up: suppose we have something that acts like a spring. Then we know qualitatively how it behaves. It oscillates back and forth in a sine wave around the equilibrium. Suppose we know what the spring constant ‘k’ is. Suppose we also know ‘m’, which represents the inertia of the thing. If it’s a real thing on a real spring it’s mass. Then we know quantitatively how it moves. It has a period, based on this spring constant and this mass. And we can say how big the oscillations are based on how big the starting displacement and velocity are. That’s everything I care about in a spring. At least until I get into something wild like several springs wired together, which I am not doing now and might never do.

    And, as we’ll see when we get back to orbits, a lot of things work close enough to springs.

     
    • tkflor 8:13 pm on Saturday, 20 May, 2017 Permalink | Reply

      “I don’t want to call this the ideal or natural state. That suggests some moral superiority to one way of existing over another, and how do I know what’s right for your thing? ”
      Why don’t you call it a “ground state”?


      • Joseph Nebus 6:49 am on Friday, 26 May, 2017 Permalink | Reply

        You’re right. This is a ground state.

        For folks just joining in, the “ground state” is what the system looks like when it’s got the least possible energy. At least the least energy consistent with it being a system at all. For a spring problem that’s the one where the thing is at rest, at the center, not displaced at all.

        In a more complicated system you can have an equilibrium that’s stable and that isn’t the ground state. That isn’t the case here, but I wonder if thinking about that didn’t make me avoid calling it a ground state.


  • Joseph Nebus 6:00 pm on Thursday, 11 May, 2017 Permalink | Reply
    Tags: differential equations, Legendre transform   

    Excuses, But Classed Up Some 


    Afraid I’m behind on resuming Why Stuff Can Orbit, mostly as a result of a power outage yesterday. It wasn’t a major one, but it did reshuffle all the week’s chores to yesterday when we could be places that had power, and kept me from doing as much typing as I wanted. I’m going to be riding this excuse for weeks.

    So instead, here, let me pass this on to you.

    It links to a post about the Legendre Transform, which is one of those cool advanced tools you get a couple years into a mathematics or physics major. It is, like many of these cool advanced tools, about solving differential equations. Differential equations turn up anytime the current state of something affects how it’s going to change, which is to say, anytime you’re looking at something not boring. It’s one of mathematics’s uses of “duals”, letting you swap between the function you’re interested in and what you know about how the function you’re interested in changes.
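
    To give a taste of the duality: here’s a minimal sketch of the transform on the classic example, trading the velocity in a kinetic energy for its momentum. The example is mine, not something from the linked post.

        import sympy as sp

        v, p, m = sp.symbols('v p m', positive=True)

        f = sp.Rational(1, 2) * m * v**2      # kinetic energy as a function of velocity
        p_of_v = sp.diff(f, v)                # the conjugate variable: p = m*v
        v_of_p = sp.solve(sp.Eq(p, p_of_v), v)[0]

        f_star = p * v_of_p - f.subs(v, v_of_p)   # the Legendre transform of f
        print(sp.simplify(f_star))                # p**2/(2*m), now a function of momentum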

    On the linked page, Jonathan Manton tries to present reasons behind the Legendre transform, in ways he likes better. It might not explain the idea in a way you like, especially if you haven’t worked with it before. But I find reading multiple attempts to explain an idea helpful. Even if one perspective doesn’t help, having a cluster of ideas often does.

     
  • Joseph Nebus 6:00 pm on Sunday, 15 January, 2017 Permalink | Reply
    Tags: differential equations, Dinette Set, Reply All, Slylock Fox

    Reading the Comics, January 14, 2017: Redeye and Reruns Edition 


    So for all I worried about the Gocomics.com redesign it’s not bad. The biggest change is it’s removed a side panel and given the space over to the comics. And while it does show comics you haven’t been reading, it only shows one per day. One week in it apparently sticks with the same comic unless you choose to dismiss that. So I’ve had it showing me The Comic Strip That Has A Finale Every Day as a strip I’m not “reading”. I’m delighted how this breaks the logic about what it means to “not read” an “ongoing comic strip”. (That strip was a Super-Fun-Pak Comix offering, as part of Ruben Bolling’s Tom the Dancing Bug. It was turned into a regular Gocomics.com feature by someone who got the joke.)

    Comic Strip Master Command responded to the change by sending out a lot of comic strips. I’m going to have to divide this week’s entry into two pieces. There’s not deep things to say about most of these comics, but I’ll make do, surely.

    Julie Larson’s Dinette Set rerun for the 8th is about one of the great uses of combinatorics. That use is working out how the number of possible things compares to the number of things there are. What’s always staggering is that the number of possible things grows so very very fast. Here one of Larson’s characters claims a science-type show made an assertion about the number of possible ideas a brain could hold. I don’t know if that’s inspired by some actual bit of pop science. I can imagine someone trying to estimate the number of possible states a brain might have.

    And that has to be larger than the number of atoms in the universe. Consider: there’s something less than a googol of atoms in the universe. But a person can certainly have the idea of the number 1, or the idea of the number 2, or the idea of the number 3, or so on. I admit a certain sameness seems to exist between the ideas of the numbers 2,038,412,562,593,604 and 2,038,412,582,593,604. But there is a difference. We can out-number the atoms in the universe even before we consider ideas like rabbits or liberal democracy or jellybeans or board games. The universe never had a chance.

    Or did it? Is it possible for a number to be too big for the human brain to ponder? If there are more digits in the number than there are atoms in the universe we can’t form any discrete representation of it, after all. … Except that we kind of can. For example, “the largest prime number less than one googolplex” is perfectly understandable. We can’t write it out in digits, I think. But you now have thought of that number, and while you may not know what its millionth decimal digit is, you also have no reason to care what that digit is. This is stepping into the troubled waters of algorithmic complexity.

    Shady Shrew is selling fancy bubble-making wands. Shady says the crazy-shaped wands cost more than the ordinary ones because of the crazy-shaped bubbles they create. Even though Slylock Fox has enough money to buy an expensive wand, he bought the cheaper one for Max Mouse. Why?

    Bob Weber Jr’s Slylock Fox and Comics for Kids for the 9th of January, 2017. Not sure why Shady Shrew is selling the circular wands at 50 cents. Sure, I understand wanting a triangle or star or other wand selling at a premium. But then why have the circular wands at such a cheap price? Wouldn’t it be better to put them at like six dollars, so that eight dollars for a fancy wand doesn’t seem that great an extravagance? You have to consider setting an appropriate anchor point for your customer base. But, then, Shady Shrew isn’t supposed to be that smart.

    Bob Weber Jr’s Slylock Fox and Comics for Kids for the 9th is built on soap bubbles. The link between the wand and the soap bubble vanishes quickly once the bubble breaks loose of the wand. But soap films that stay adhered to the wand or mesh can be quite strangely shaped. Soap films are a practical example of a kind of partial differential equations problem. Partial differential equations often appear when we want to talk about shapes and surfaces and materials that tug or deform the material near them. The shape of a soap film will be the one that minimizes the surface tension energy of the film’s surface. It’s a challenge to solve analytically. It’s still a good challenge to solve numerically. But you can do that most wonderful of things and solve a differential equation experimentally, if you must. It’s old-fashioned. The computer tools to do this have gotten so common it’s hard to justify going to the engineering lab and getting soapy water all over a mathematician’s fingers. But the option is there.
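
    If you’d like to try the numerical version yourself, here’s a minimal sketch. For small deflections a soap film’s height approximately satisfies Laplace’s equation, and simple relaxation on a grid will settle toward the solution. The wireframe heights are invented:

        import numpy as np

        n = 50
        h = np.zeros((n, n))          # film height over a square wireframe
        h[0, :] = 1.0                 # one edge of the frame lifted; the others left flat

        for _ in range(5000):         # Jacobi relaxation toward the steady film shape
            h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1] +
                                    h[1:-1, :-2] + h[1:-1, 2:])

        print(h[n // 2, n // 2])      # the film's height at the middle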

    Gordon Bess’s Redeye rerun from the 28th of August, 1970, is one of a string of confused-student jokes. (The strip had a Generic Comedic Western Indian setting, putting it in the vein of Hagar the Horrible and other comic-anachronism comics.) But I wonder if there are kids baffled by numbers getting made several different ways. Experience with recipes and assembly instructions and the like might train someone to thinking there’s one correct way to make something. That could build a bad intuition about what additions can work.

    'I'm never going to learn anything with Redeye as my teacher! Yesterday he told me that four and one make five! Today he said, *two* and *three* make five!'

    Gordon Bess’s Redeye rerun from the 28th of August, 1970. Reprinted the 9th of January, 2017. What makes the strip work is how it’s tied to the personalities of these kids and couldn’t be transplanted into every other comic strip with two kids in it.

    Corey Pandolph’s Barkeater Lake rerun for the 9th just name-drops algebra. And that as a word that starts with the “alj” sound. So far as I’m aware there’s not a clear etymological link between Algeria and algebra, despite both being modified Arabic words. Algebra comes from “al-jabr”, about reuniting broken things. Algeria comes from Algiers, which Wikipedia says derives from “al-jaza’ir”, “the Islands [of the Mazghanna tribe]”.

    Guy Gilchrist’s Nancy for the 9th is another mathematics-cameo strip. But it was also the first strip I ran across this week that mentioned mathematics and wasn’t a rerun. I’ll take it.

    Donna A Lewis’s Reply All for the 9th has Lizzie accuse her boyfriend of cheating by using mathematics in Scrabble. He seems to just be counting tiles, though. I think Lizzie suspects something like Blackjack card-counting is going on. Since there are only so many of each letter available knowing just how many tiles remain could maybe offer some guidance how to play? But I don’t see how. In Blackjack a player gets to decide whether to take more cards or not. Counting cards can suggest whether it’s more likely or less likely that another card will make the player or dealer bust. Scrabble doesn’t offer that choice. One has to refill up to seven tiles until the tile bag hasn’t got enough left. Perhaps I’m overlooking something; I haven’t played much Scrabble since I was a kid.

    Perhaps we can take the strip as portraying the folk belief that mathematicians get to know secret, barely-explainable advantages on ordinary folks. That itself reflects a folk belief that experts of any kind are endowed with vaguely cheating knowledge. I’ll admit being able to go up to a blackboard and write with confidence a bunch of integrals feels a bit like magic. This doesn’t help with Scrabble.

    'Want me to teach you how to add and subtract, Pokey?' 'Sure!' 'Okay ... if you had four cookies and I asked you for two, how many would you have left?' 'I'd still have four!'

    Gordon Bess’s Redeye rerun from the 29th of August, 1970. Reprinted the 10th of January, 2017. To be less snarky, I do like the simply-expressed weariness on the girl’s face. It’s hard to communicate feelings with few pen strokes.

    Gordon Bess’s Redeye continued the confused-student thread on the 29th of August, 1970. This one’s a much older joke about resisting word problems.

    Ryan North’s Dinosaur Comics rerun for the 10th talks about multiverses. If we allow there to be infinitely many possible universes that would suggest infinitely many different Shakespeares writing enormously many variations of everything. It’s an interesting variant on the monkeys-at-typewriters problem. I noticed how T-Rex put Shakespeare at typewriters too. That’ll have many of the same practical problems as monkeys-at-typewriters do, though. There’ll be a lot of variations that are just a few words or a trivial scene different from what we have, for example. Or there’ll be variants that are completely uninteresting, or so different we can barely recognize them as relevant. And that’s if it’s actually possible for there to be an alternate universe with Shakespeare writing his plays differently. That seems like it should be possible, but we lack evidence that it is.

     
  • Joseph Nebus 6:00 pm on Thursday, 5 January, 2017 Permalink | Reply
    Tags: differential equations, recap

    What I Learned Doing The End 2016 Mathematics A To Z 


    The slightest thing I learned in the most recent set of essays is that I somehow slid from the descriptive “End Of 2016” title to the prescriptive “End 2016” identifier for the series. My unscientific survey suggests that most people would agree that we had too much 2016 and would have been better off doing without it altogether. So it goes.

    The most important thing I learned about this is I have to pace things better. The A To Z essays have been creeping up in length. I didn’t keep close track of their lengths but I don’t think any of them came in under a thousand words. 1500 words was more common. And that’s fine enough, but at three per week, plus the Reading the Comics posts, that’s 5500 or 6000 words of mathematics alone. And that’s before getting to my humor blog, which even on a brief week will be a couple thousand words. I understand in retrospect why November and December felt like I didn’t have any time outside the word mines.

    I’m not bothered by writing longer essays, mind. I can apparently go on at any length on any subject. And I like the words I’ve been using. My suspicion is between these A To Zs and the Theorem Thursdays over the summer I’ve found a mode for writing pop mathematics that works for me. It’s just a matter of how to balance workloads. The humor blog has gotten consistently better readership, for the obvious reasons (lately I’ve been trying to explain what the story comics are doing), but the mathematics is more satisfying. If I should have to cut back on either it’d be the humor blog that gets the cut first.

    Another little discovery is that I can swap out equations and formulas and the like for historical discussion. That’s probably a useful tradeoff for most of my readers. And it plays to my natural tendencies. It is very easy to imagine me having gone into history rather than into mathematics or science. It makes me aware how mediocre my knowledge of mathematics history is, though. For example, several times in the End 2016 A To Z the Crisis of Foundations came up, directly or in passing. But I’ve never read a proper history, not even a basic essay, about the Crisis. I don’t even know of a good description of this important-to-the-field event. Most mathematics history focuses around biographies of a few figures, often cribbed from Eric Temple Bell’s great but unreliable book, or a couple of famous specific incidents. (Newton versus Leibniz, the bridges of Königsberg, Cantor’s insanity, Gödel’s citizenship exam.) Plus Bourbaki.

    That’s not enough for someone taking the subject seriously, and I do mean to. So if someone has a suggestion for good histories of, for example, how Fourier series affected mathematicians’ understanding of what functions are, I’d love to know it. Maybe I should set that as a standing open request.

    In looking over the subjects I wrote about I find a pretty strong mix of group theory and real analysis. Maybe that shouldn’t surprise. Those are two of the maybe three legs that form a mathematics major’s education. So anyone wanting to understand mathematicians would see this stuff and have questions about it. (There are more things mathematics majors learn, but there are a handful of things almost any mathematics major is sure to spend a year being baffled by.)

    The third leg, I’d say, is differential equations. That’s a fantastic field, but it’s hard to describe without equations. Also pictures of what the equations imply. I’ve tended towards essays with few equations and pictures. That’s my laziness. Equations are best written in LaTeX, a typesetting tool that might as well be the standard for mathematicians writing papers and books. While WordPress supports a bit of LaTeX it isn’t quite effortless. That comes back around to balancing my workload. If I balance that a little better I can explain solving first-order differential equations by integrating factors. (This is a prank. Nobody has ever needed to solve a first-order differential equation by integrating factors except for mathematics majors being taught the method.) But maybe I could make a go of that.

    I’m not setting any particular date for the next A-To-Z, or similar, project. I need some time to recuperate. And maybe some time to think of other running projects that would be fun or educational for me. There’ll be something, though.

     
  • Joseph Nebus 6:00 pm on Friday, 4 November, 2016 Permalink | Reply
    Tags: differential equations, RPI

    The End 2016 Mathematics A To Z: Boundary Value Problems 


    I went to a grad school, Rensselaer Polytechnic Institute. The joke at the school is that the mathematics department has two tracks, “Applied Mathematics” and “More Applied Mathematics”. So I got to know the subject of today’s A To Z very well. It’s worth your knowing too.

    Boundary Value Problems.

    I’ve talked about differential equations before. I’ll talk about them again. They’re important. They might be the most directly useful sort of higher mathematics. They turn up naturally whenever you have a system whose changes depend on the current state of things.

    There are many kinds of differential equations problems. The ones that come first to mind, and that students first learn, are “initial value problems”. In these, you’re given some system, told how it changes in time, and told what things are at a start. There’s good reasons to do that. It’s conceptually easy. It describes all sorts of systems where something moves. Think of your classic physics problems of a ball being tossed in the air, or a weight being put on a spring, or a planet orbiting a sun. These are classic initial value problems. They almost look like natural experiments. Set a thing up and watch what happens.

    They’re not everything. There’s another class of problems at least as important. Maybe more important. In these we’re given how the parts of a system affect one another. And we’re told some information about the edges of the system. The boundaries, that is. And these are “boundary value problems”.

    Mathematics majors learn them after getting thoroughly trained in and sick of initial value problems. There’s reasons for that. First is that they almost need to be about problems with multiple variables. You can set one up for, like, a ball tossed in the air. But they’re rarer. Differential equations for multiple variables are harder than differential equations for a single variable, because of course. We have to learn the tools of “partial differential equations”. In these we work out how the system changes if we pretend all but one of the variables is fixed. We combine information about all those changes for each individual changing variable. Lots more, and lots stranger, stuff can happen.

    The partial differential equation describes some region. It involves maybe some space, maybe some time, maybe both. There’s a region, called the “domain”, for which the differential equation is true.

    For example, maybe we’re interested in the amount of heat in a metal bar as it’s warmed on one end and cooled on another. The domain here is the length of the bar and the time it’s subjected to the heat and cool. Or maybe we’re interested in the amount of water flowing through a section of a river bed. The domain here is the length and width and depth of the river, if we suppose the river isn’t swelling or shrinking or changing much. Maybe we’re interested in the electric field created by putting a bit of charge on a metal ball. Then the domain is the entire universe except the metal ball and the space inside it. We’re comfortable with boundlessly large domains.

    But what makes this a boundary value problem is that we know something about what the boundary looks like. Once again a mathematics term is less baffling than you might figure. The boundary is just what it sounds like: the edge of the domain, the part that divides the domain from not-the-domain. The metal bar being heated up has boundaries on either end. The river bed has boundaries at the surface of the water, the banks of the river, and the start and the end of wherever we’re observing. The metal ball has boundaries of the ball’s surface and … uh … the limits of space and time, somewhere off infinitely far away.

    There’s all kinds of information we might get about a boundary. What we actually get is one of four kinds. The first kind is “we get told what values the solution should be at the boundary”. Mathematics majors love this because it lets us know we at least have the boundary’s values right. It’s certainly what we learn first. And it might be most common. If we’re measuring, say, temperature or fluid speed or something like that we feel like we can know what these are. If we need a name we call this “Dirichlet Boundary Conditions”. That’s named for Peter Gustav Lejeune Dirichlet. He’s one of those people mathematics majors keep running across. We get stuff named for him in mathematical physics, in probability, in heat, in Fourier series.
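
    As a concrete sketch of Dirichlet conditions: the steady-state temperature of that heated metal bar, with invented end temperatures of 100 and 0. Finite differences turn the problem into a little linear system, and the boundary values pin down the ends:

        import numpy as np

        n = 11                                 # grid points along the bar
        A = np.zeros((n, n))
        b = np.zeros(n)

        A[0, 0], b[0] = 1.0, 100.0             # Dirichlet condition: u = 100 at the left end
        A[-1, -1], b[-1] = 1.0, 0.0            # Dirichlet condition: u = 0 at the right end
        for i in range(1, n - 1):              # interior: u'' = 0 becomes u[i-1] - 2u[i] + u[i+1] = 0
            A[i, i - 1], A[i, i], A[i, i + 1] = 1.0, -2.0, 1.0

        print(np.linalg.solve(A, b))           # a straight line cooling from 100 down to 0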

    The second kind is “we get told what the derivative of the solution should be at the boundary”. Mathematics majors hate this because we’re having a hard enough time solving this already and you want us to worry about the derivative of the solution on the boundary? Give us something we can check, please. But this sort of boundary condition keeps turning up. It comes up, for instance, in the electric field around a conductive metal box, or ball, or plate. The electric field will be, near the metal plate, perpendicular to the conductive metal. Goodness knows what the electric field’s value is, but we know something about how it changes. If we need a name we call this “Neumann Boundary Conditions”. This is not named for the applied mathematician/computer scientist/physicist John von Neumann. Nobody remembers the Neumann it is named for, who was Carl Neumann.

    The third kind is called “Robin boundary conditions” if someone remembers the name for it. It’s slightly named for Victor Gustave Robin. In these we don’t necessarily know the value the solution should have on the boundary. And we don’t know what the derivative of the solution on the boundary should be. But we do know some linear combination of them. That is, we know some number times the original value plus some (possibly other) number times the derivative. Mathematics majors loathe this one because the Neumann boundary conditions were hard enough and now we have this? They turn up in heat and diffusion problems, when there’s something limiting the flow of whatever you’re studying into and out of the region.

    And the last kind is called “mixed boundary conditions” as, I don’t know, nobody seems to have got their name attached to it. In this we break up the boundary. For some of it we get, say, Dirichlet boundary conditions. For some of the boundary we get, say, Neumann boundary conditions. Or maybe we have Robin boundary conditions for some of the edge and Dirichlet for others. Whatever. This mathematics majors get once or twice, as punishment for their sinful natures, and then we try never to think of them again because of the pain. Sometimes it’s the only approach that fits the problem. Still hurts.

    We see boundary value problems when we do things like blow a soap bubble using weird wireframes and ponder the shape. Or when we mix hot coffee and cold milk in a travel mug and ponder how the temperatures mix. Or when we see a pipe squeezing into narrower channels and wonder how this affects the speed of water flowing into and out of it. Often these will be problems about how stuff over a region, maybe of space and maybe of time, will settle down to some predictable, steady pattern. This is why it turns up all over applied mathematics problems, and why in grad school we got to know them so very well.

     
    • mathtuition88 2:49 am on Saturday, 5 November, 2016 Permalink | Reply

      Cool. I didn’t know it was not von Neumann behind the Neumann condition.


      • Joseph Nebus 5:53 am on Wednesday, 9 November, 2016 Permalink | Reply

        I know. It’s one of those things that surprises mathematics majors. We just get the last name, and everybody knows John von Neumann and makes the reasonable assumption. Mathematics history isn’t taught enough with the main subject, unfortunately.


    • davekingsbury 8:35 pm on Saturday, 5 November, 2016 Permalink | Reply

      I like the phrase ‘mixed boundary conditions’ … seems to describe the political situation on both sides of the pond rather well, if you’ll forgive the non-mathematical observation!


      • Joseph Nebus 5:54 am on Wednesday, 9 November, 2016 Permalink | Reply

        I’ll forgive it easily. There are a lot of conditions that can be considered mixed; maybe all life is.


    • elkement (Elke Stangl) 8:10 am on Thursday, 17 November, 2016 Permalink | Reply

      Interesting – I did not know that there is a name for mixed boundary conditions … other than calling them a mixture of Neumann and Dirichlet conditions or something! I encountered these three types of conditions for the first time when learning about how to solve Maxwell’s equations, e.g. determining distributions of charges.
      As for poor unknown Neumann who happened to have a well-known name: Lorenz of Lorenz gauge fame comes to mind … who had the misfortune that his name differed by just one t from famous Lorentz (of respective covariance fame), so that the gauge is often called Lorentz gauge as this makes ‘unfortunately’ perfect sense.


      • Joseph Nebus 3:57 am on Sunday, 20 November, 2016 Permalink | Reply

        I’m a little surprised no skilled mathematician has been able to attach her name to mixed boundary conditions. But probably there’s no good way to attack that class of problems with the sort of all-purpose analytic insight that Dirichlet or Neumann or Robin could manage. The only hope would be for someone to come up with such a perfect explanation of how to study the problem that her name got attached to the kind of equations.

        Ah, now, Lorenz … that’s a name that I’ve known in the past I’ve merged into Lorentz, and tried to pay attention to make sure I wouldn’t keep doing that. And your mentioning him reminds me that I had forgotten and let his identity blend in where it doesn’t belong. Maybe we need to start referring to Lorenz and Lorentz so consistently with their first names that we can’t do that anymore.


  • Joseph Nebus 6:00 pm on Friday, 7 October, 2016 Permalink | Reply
    Tags: differential equations

    How Differential Calculus Works 


    I’m at a point in my Why Stuff Can Orbit essays where I need to talk about derivatives. If I don’t then I’m going to be stuck with qualitative talk that’s too hard to focus on anything. But it’s a big subject. It’s about two-fifths of a Freshman Calculus course and one of the two main legs of calculus. So I want to pull out a quick essay explaining the important stuff.

    Derivatives, also called differentials, are about how things change. By “things” I mean “functions”. And particularly I mean functions whose domain is in the real numbers and whose range is in the real numbers. That’s the starting point. We can define derivatives for functions with domains or ranges that are vectors of real numbers, or that are complex-valued numbers, or are even more exotic kinds of numbers. But those ideas are all built on the derivatives for real-valued functions. So if you get really good at them, everything else is easy.

    Derivatives are the part of calculus where we study how functions change. They’re blessed with some easy-to-visualize interpretations. Suppose you have a function that describes where something is at a given time. Then its derivative with respect to time is the thing’s velocity. You can build a pretty good intuitive feeling for derivatives that way. I won’t stop you from it.

    A function’s derivative is itself a function. The derivative doesn’t have to be constant, although it’s possible. Sometimes the derivative is positive. This means the original function increases as whatever its variable increases. Sometimes the derivative is negative. This means the original function decreases as its variable increases. If the derivative is a big number this means it’s increasing or decreasing fast. If the derivative is a small number it’s increasing or decreasing slowly. If the derivative is zero then the function isn’t increasing or decreasing, at least not at this particular value of the variable. This might be a local maximum, where the function’s got a larger value than it has anywhere else nearby. It might be a local minimum, where the function’s got a smaller value than it has anywhere else nearby. Or it might just be a little pause in the growth or shrinkage of the function. No telling, at least not just from knowing the derivative is zero.

    Derivatives tell you something about where the function is going. We can, and in practice often do, make approximations to a function that are built on derivatives. Many interesting functions are real pains to calculate. Derivatives let us make polynomials that are as good an approximation as we need and that a computer can do for us instead.

    Derivatives can change rapidly. That’s all right. They’re functions in their own right. Sometimes they change more rapidly and more extremely than the original function did. Sometimes they don’t. Depends what the original function was like.

    Not every function has a derivative. Functions that have derivatives don’t necessarily have them at every point. A function has to be continuous to have a derivative. A function that jumps all over the place doesn’t really have a direction, not that differential calculus will let us suss out. But being continuous isn’t enough. The function also has to be … well, we call it “differentiable”. The word at least tells us what the property is good for, even if it doesn’t say how to tell if a function is differentiable. The function has to not have any corners, points where the direction suddenly and unpredictably changes.

    Otherwise differentiable functions can have some corners, some non-differentiable points. For instance the height of a bouncing ball, over time, looks like a bunch of upside-down U-like shapes that suddenly rebound off the floor and go up again. Those points interrupting the upside-down U’s aren’t differentiable, even though the rest of the curve is.

    It’s possible to make functions that are nothing but corners and that aren’t differentiable anywhere. These are done mostly to win quarrels with 19th century mathematicians about the nature of the real numbers. We use them in Real Analysis to blow students’ minds, and occasionally after that to give a fanciful idea a hard test. In doing real-world physics we usually don’t have to think about them.

    So why do we do them? Because they tell us how functions change. They can tell us where functions momentarily don’t change, which are usually important points. And they let us write equations with differentials in them, known in the trade as “differential equations”. They’re also known to mathematics majors as “diffy Q’s”, a name which delights everyone. Diffy Q’s let us describe physical systems where there’s any kind of feedback. If something interacts with its surroundings, that interaction’s probably described by differential equations.

    So how do we do them? We start with a function that we’ll call ‘f’ because we’re not wasting more of our lives figuring out names for functions and we’re feeling smugly confident Turkish authorities aren’t interested in us.

    f’s domain is real numbers, for example, the one we’ll call ‘x’. Its range is also real numbers. f(x) is some number in that range. It’s the one that the function f matches with the domain’s value of x. We read f(x) aloud as “the value of f evaluated at the number x”, if we’re pretending or trying to scare Freshman Calculus classes. Really we say “f of x” or maybe “f at x”.

    There’s a couple ways to write down the derivative of f. First, for example, we say “the derivative of f with respect to x”. By that we mean how does the value of f(x) change if there’s a small change in x. That difference matters if we have a function that depends on more than one variable, if we had an “f(x, y)”. Having several variables for f changes stuff. Mostly it makes the work longer but not harder. We don’t need that here.

    But there’s a couple ways to write this derivative. The best one if it’s a function of one variable is to put a little ‘ mark: f'(x). We pronounce that “f-prime of x”. That’s good enough if there’s only the one variable to worry about. We have other symbols. One we use a lot in doing stuff with differentials looks for all the world like a fraction. We spend a lot of time in differential calculus explaining why it isn’t a fraction, and then in integral calculus we do some shady stuff that treats it like it is. But that’s \frac{df}{dx} . This also appears as \frac{d}{dx} f . If you encounter this do not cross out the d’s from the numerator and denominator. In this context the d’s are not numbers. They’re actually operators, which is what we call a function whose domain is functions. They’re notational quirks is all. Accept them and move on.

    How do we calculate it? In Freshman Calculus we introduce it with this expression involving limits and fractions and delta-symbols (the Greek letter that looks like a triangle). We do that to scare off students who aren’t taking this stuff seriously. Also we use that formal proper definition to prove some simple rules work. Then we use those simple rules. Here’s some of them.

    1. The derivative of something that doesn’t change is 0.
    2. The derivative of xⁿ, where n is any constant real number — positive, negative, whole, a fraction, rational, irrational, whatever — is n xⁿ⁻¹.
    3. If f and g are both functions and have derivatives, then the derivative of the function f plus g is the derivative of f plus the derivative of g. This probably has some proper name but it’s used so much it kind of turns invisible. I dunno. I’d call it something about linearity because “linearity” is what happens when adding stuff works like adding numbers does.
    4. If f and g are both functions and have derivatives, then the derivative of the function f times g is the derivative of f times the original g plus the original f times the derivative of g. This is called the Product Rule.
    5. If f and g are both functions and have derivatives we might do something called composing them. That is, for a given x we find the value of g(x), and then we put that value in to f. That we write as f(g(x)). For example, “the sine of the square of x”. Well, the derivative of that with respect to x is the derivative of f evaluated at g(x) times the derivative of g. That is, f'(g(x)) times g'(x). This is called the Chain Rule and believe it or not this turns up all the time.
    6. If f is a function and C is some constant number, then the derivative of C times f(x) is just C times the derivative of f(x), that is, C times f'(x). You don’t really need this as a separate rule. If you know the Product Rule and that first one about the derivatives of things that don’t change then this follows. But who wants to go through that many steps for a measly little result like this?
    7. Calculus textbooks say there’s this Quotient Rule for when you divide f by g but you know what? The one time you’re going to need it it’s easier to work out by the Product Rule and the Chain Rule and your life is too short for the Quotient Rule. Srsly.
    8. There’s some functions with special rules for the derivative. You can work them out from the Real And Proper Definition. Or you can work them out from infinitely-long polynomial approximations that somehow make sense in context. But what you really do is just remember the ones anyone actually uses. The derivative of eˣ is eˣ and we love it for that. That’s why e is that 2.71828(etc) number and not something else. The derivative of the natural log of x is 1/x. That’s what makes it natural. The derivative of the sine of x is the cosine of x. That’s if x is measured in radians, which it always is in calculus, because it makes that work. The derivative of the cosine of x is minus the sine of x. Again, radians. The derivative of the tangent of x is um the square of the secant of x? I dunno, look that sucker up. The derivative of the arctangent of x is \frac{1}{1 + x^2} and you know what? The calculus book lists this in a table in the inside covers. Just use that. Arctangent. Sheesh. We just made up the “secant” and the “cosecant” to haze Freshman Calculus students anyway. We don’t get a lot of fun. Let us have this. Or we’ll make you hear of the inverse hyperbolic cosecant.
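
    And if your memory of the rules is shaky, a computer algebra system knows them all. A few of the rules above, checked in Python with sympy:

        import sympy as sp

        x = sp.Symbol('x')

        print(sp.diff(x**5, x))            # rule 2: 5*x**4
        print(sp.diff(sp.sin(x**2), x))    # rule 5, the Chain Rule: 2*x*cos(x**2)
        print(sp.diff(sp.exp(x), x))       # rule 8: exp(x), its own derivative
        print(sp.diff(sp.atan(x), x))      # rule 8 again: 1/(x**2 + 1), no table lookup needed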

    So. What’s this all mean for central force problems? Well, here’s what the effective potential energy V(r) usually looks like:

    V_{eff}(r) = C r^n + \frac{L^2}{2 m r^2}

    So, first thing. The variable here is ‘r’. The variable name doesn’t matter. It’s just whatever name is convenient and reminds us somehow why we’re talking about this number. We adapt our rules about derivatives with respect to ‘x’ to this new name by striking out ‘x’ wherever it appeared and writing in ‘r’ instead.

    Second thing. C is a constant. So is n. So is L and so is m. 2 is also a constant, which you never think about until a mathematician brings it up. That second term is a little complicated in form. But what it means is “a constant, which happens to be the number \frac{L^2}{2m} , multiplied by r⁻²”.

    So the derivative of V_eff, with respect to r, is the derivative of the first term plus the derivative of the second. The derivative of each of those terms is going to be some constant times r to another number. And that’s going to be:

    V'_{eff}(r) = C n r^{n - 1} - 2 \frac{L^2}{2 m r^3}

    And there you can divide the 2 in the numerator and the 2 in the denominator. So we could make this look a little simpler yet, but don’t worry about that.
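
    sympy will do this derivative too, which makes a handy check on the exponents:

        import sympy as sp

        r = sp.Symbol('r', positive=True)
        C, n, L, m = sp.symbols('C n L m', positive=True)

        V_eff = C * r**n + L**2 / (2 * m * r**2)
        print(sp.diff(V_eff, r))     # C*n*r**(n - 1) - L**2/(m*r**3), the 2s already cancelled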

    OK. So that’s what you need to know about differential calculus to understand orbital mechanics. Not everything. Just what we need for the next bit.

     
    • davekingsbury 1:06 pm on Saturday, 8 October, 2016 Permalink | Reply

      Good article. Just finished Morton Cohen’s biography of Lewis Carroll, who was a great populariser of mathematics, logic, etc. Started a shared poem in tribute to him, here is a cheeky plug, hope you don’t mind!

      https://davekingsbury.wordpress.com/2016/10/08/web-of-life/


      • Joseph Nebus 12:22 am on Tuesday, 11 October, 2016 Permalink | Reply

        Thanks for sharing and I’m quite happy to have your plug here. I know about Carroll’s mathematics-popularization side; his logic puzzles are particularly choice ones even today. (Granting that deductive logic really lends itself to being funny.)

        Oddly I haven’t read a proper biography of Carroll, or most of the other mathematicians I’m interested in. Which is strange since I’m so very interested in the history and the cultural development of mathematics.


  • Joseph Nebus 3:00 pm on Saturday, 25 June, 2016 Permalink | Reply
    Tags: branding, differential equations

    Reading the Comics, June 25, 2016: Busy Week Edition 


    I had meant to cut the Reading The Comics posts back to a reasonable one a week. Then came the 23rd, which had something like six hundred mathematically-themed comic strips. So I could post another impossibly long article on Sunday or split what I have. And splitting works better for my posting count, so, here we are.

    Charles Brubaker’s Ask A Cat for the 19th is a soap-bubbles strip. As ever happens with comic strips, the cat blows bubbles that can’t happen without wireframes and skillful bubble-blowing artistry. It happens that a few days ago I linked to a couple essays showing off some magnificent surfaces that the right wireframe boundary might inspire. The mathematics describing the shape a soap bubble should take isn’t hard; I’m confident I could’ve understood the equations as an undergraduate. Finding exact solutions … I’m not sure I could have done. (I’d still want someone looking over my work if I did them today.) But numerical solutions, that I’d be confident in doing. And the real thing is available when you’re ready to get your hands … dirty … with soapy water.

    Rick Stromoski’s Soup To Nutz for the 19th shows RoyBoy on the brink of understanding symmetry. To lose at rock-paper-scissors is indeed just as hard as winning is. Suppose we replaced the names of the things thrown with letters. Suppose we replace ‘beats’ and ‘loses to’ with nonsense words. Then we could describe the game: A flobs B. B flobs C. C flobs A. A dostks C. C dostks B. B dostks A. There’s no way to tell, from this, whether A is rock or paper or scissors, or whether ‘flob’ or ‘dostk’ is a win.
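    If the relabeling argument feels too slick you can make the computer grind through it. A sketch: write down the ‘beats’ table and hunt for renamings of the throws that turn it into the ‘loses to’ table.

    ```python
    # Hunting for relabelings of the throws that swap winning and losing.
    from itertools import permutations

    beats = {'rock': 'scissors', 'scissors': 'paper', 'paper': 'rock'}
    loses_to = {loser: winner for winner, loser in beats.items()}

    names = list(beats)
    for perm in permutations(names):
        relabel = dict(zip(names, perm))
        if {relabel[w]: relabel[l] for w, l in beats.items()} == loses_to:
            print(relabel)   # any swap of two throws turns 'beats' into 'loses to'
    ```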

    Bill Whitehead’s Free Range for the 20th is the old joke about tipping being the hardest kind of mathematics to do. Proof? There’s an enormous blackboard full of symbols and the three guys in lab coats are still having trouble with it. I have long wondered why tips are used as the model of impossibly difficult things to compute that aren’t taxes. I suppose the idea of taking “fifteen percent” (or twenty, or whatever) of something suggests a need for precision. And it’ll be fifteen percent of a number chosen without any interest in making the calculation neat. So it looks like the worst possible kind of arithmetic problem. But the secret, of course, is that you don’t have to have “the” right answer. You just have to land anywhere in an acceptable range. You can work out a fraction — a sixth, a fifth, or so — of a number that’s close to the tab and you’ll be right. So, as ever, it’s important to know how to tell whether you have a correct answer before worrying about calculating it.
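    To make that concrete, here’s a sketch with a made-up tab. Round the bill to something friendly and take a fifth or a sixth of it; either lands inside the fifteen-to-twenty-percent window:

    ```python
    # Easy fractions of a rounded tab still give acceptable tips.
    tab = 47.38
    friendly = round(tab)    # 47, which is easy to take fractions of
    for fraction in (1/6, 1/5):
        tip = friendly * fraction
        print(round(tip, 2), 0.15 * tab <= tip <= 0.20 * tab)
    # prints 7.83 True, then 9.4 True
    ```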

    Allison Barrows’s Preeteena rerun for the 20th is your cheerleading geometry joke for this week.

    'I refuse to change my personality just for a stupid speech.' 'Fi, you wouldn't have to! In fact, make it an asset! Brand yourself as The Math Curmudgeon! ... The Grouchy Grapher ... The Sour Cosine ... The Number Grump ... The Count of Carping ... The Kvetching Quotient' 'I GET IT!'

    Bill Holbrook’s On The Fastrack for the 22nd of June, 2016. There are so many bloggers wondering if Holbrook is talking about them.

    I am sure Bill Holbrook’s On The Fastrack for the 22nd is not aimed at me. He hangs around Usenet group rec.arts.comics.strips some, as I do, and we’ve communicated a bit that way. But I can’t imagine he thinks of me much or even at all once he’s done with racs for the day. Anyway, Dethany does point out how a clear identity helps one communicate mathematics well. (Fi is to talk with elementary school girls about mathematics careers.) And bitterness is always a well-received pose. Me, I’m aware that my pop-mathematics brand identity is best described as “I guess he writes a couple things a week, doesn’t he?” and I could probably use some stronger hook, somewhere. I just don’t feel curmudgeonly most of the time.

    Darby Conley’s Get Fuzzy rerun for the 22nd is about arithmetic as a way to be obscure. We’ve all been there. I had, at first, read Bucky’s rating as “out of 178 1/3 π” and thought, well, that’s not too bad since one-third of π is pretty close to 1. But then, Conley being a normal person, probably meant “one-hundred seventy-eight and a third”, and π times that is a mess. Well, it’s somewhere around 550 or so. Octave tells me it’s more like 560.251 and so on.

     
  • Joseph Nebus 3:00 pm on Friday, 1 April, 2016 Permalink | Reply
    Tags: , differential equations, , , sizes,   

    A Leap Day 2016 Mathematics A To Z: Orthonormal 


    Jacob Kanev had requested “orthogonal” for this glossary. I’d be happy to oblige. But I used the word in last summer’s Mathematics A To Z. And I admit I’m tempted to just reprint that essay, since it would save some needed time. But I can do something more.

    Orthonormal.

    “Orthogonal” is another word for “perpendicular”. Mathematicians use it for reasons I’m not precisely sure of. My belief is that it’s because “perpendicular” sounds like we’re talking about directions. And we want to extend the idea to things that aren’t necessarily directions. As undergraduate majors, mathematicians first learn orthogonality for vectors, things pointing in different directions. Then we extend it to other ideas. To functions, particularly, but we can also define it for spaces and for other stuff.

    I was vague, last summer, about how we do that. We do it by creating a function called the “inner product”. That takes in two of whatever things we’re measuring and gives us a real number. If the inner product of two things is zero, then the two things are orthogonal.

    The first example mathematics majors learn of this, before they even hear the words “inner product”, is the dot product. It’s for vectors, ordered sets of numbers. The dot product we find by matching up numbers in the corresponding slots for the two vectors, multiplying them together, and then adding up the products. For example. Give me the vector with values (1, 2, 3), and the other vector with values (-6, 5, -4). The inner product will be 1 times -6 (which is -6) plus 2 times 5 (which is 10) plus 3 times -4 (which is -12). So that’s -6 + 10 – 12 or -8.

    So those vectors aren’t orthogonal. But how about the vectors (1, -1, 0) and (0, 0, 1)? Their dot product is 1 times 0 (which is 0) plus -1 times 0 (which is 0) plus 0 times 1 (which is 0). The vectors are perpendicular. And if you tried drawing this you’d see, yeah, they are. The first vector we’d draw as being inside a flat plane, and the second vector as pointing up, through that plane, like a thumbtack.

    So that’s orthogonal. What about this orthonormal stuff?

    Well … the inner product can tell us something besides orthogonality. What happens if we take the inner product of a vector with itself? Say, (1, 2, 3) with itself? That’s going to be 1 times 1 (which is 1) plus 2 times 2 (4, according to rumor) plus 3 times 3 (which is 9). That’s 14, a tidy sum, although, so what?

    The inner product of (-6, 5, -4) with itself? Oh, that’s some ugly numbers. Let’s skip it. How about the inner product of (1, -1, 0) with itself? That’ll be 1 times 1 (which is 1) plus -1 times -1 (which is positive 1) plus 0 times 0 (which is 0). That adds up to 2. And now, wait a minute. This might be something.

    Start from somewhere. Move 1 unit to the east. (Don’t care what the unit is. Inches, kilometers, astronomical units, anything.) Then move -1 units to the north, or like normal people would say, 1 unit to the south. How far are you from the starting point? … Well, you’re the square root of 2 units away.

    Now imagine starting from somewhere and moving 1 unit east, and then 2 units north, and then 3 units straight up, because you found a convenient elevator. How far are you from the starting point? This may take a moment of fiddling around with the Pythagorean theorem. But you’re the square root of 14 units away.

    And what the heck, (0, 0, 1). The inner product of that with itself is 0 times 0 (which is zero) plus 0 times 0 (still zero) plus 1 times 1 (which is 1). That adds up to 1. And, yeah, if we go one unit straight up, we’re one unit away from where we started.

    The inner product of a vector with itself gives us the square of the vector’s length. At least if we aren’t using some freak definition of inner products and lengths and vectors. And this is great! It means we can talk about the length — maybe better to say the size — of things that maybe don’t have obvious sizes.

    Some stuff will have convenient sizes. For example, they’ll have size 1. The vector (0, 0, 1) was one such. So is (1, 0, 0). And you can think of another example easily. Yes, it’s \left(\frac{1}{\sqrt{2}}, -\frac{1}{2}, \frac{1}{2}\right) . (Go ahead, check!)

    So by “orthonormal” we mean a collection of things that are orthogonal to each other, and that themselves are all of size 1. It’s a description of both what things are by themselves and how they relate to one another. A thing can’t be orthonormal by itself, for the same reason a line can’t be perpendicular to nothing in particular. But a pair of things might be orthogonal, and they might be the right length to be orthonormal too.
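    If you’d rather skip the Pythagorean fiddling, numpy’s dot product does all the work. A minimal sketch of the checks above:

    ```python
    # Dot products for orthogonality; a vector dotted with itself for size.
    import numpy as np

    u = np.array([1, -1, 0])
    v = np.array([0, 0, 1])
    print(np.dot(u, v))   # 0: u and v are orthogonal
    print(np.dot(u, u))   # 2: u has length sqrt(2), so it is not normal
    w = np.array([1/np.sqrt(2), -0.5, 0.5])
    print(np.dot(w, w))   # 1.0, up to rounding: w has size 1
    ```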

    Why do this? Well, the same reasons we always do this. We can impose something like direction onto a problem. We might be able to break up a problem into simpler problems, one in each direction. We might at least be able to simplify the ways different directions are entangled. We might be able to write a problem’s solution as the sum of solutions to a standard set of representative simple problems. This one turns up all the time. And an orthogonal set of something is often a really good choice of a standard set of representative problems.

    This sort of thing turns up a lot when solving differential equations. And those often turn up when we want to describe things that happen in the real world. So a good number of mathematicians develop a habit of looking for orthonormal sets.

     
    • howardat58 6:16 pm on Friday, 1 April, 2016 Permalink | Reply

      So how about “Fourier Series” (next year?).


      • Joseph Nebus 6:23 pm on Monday, 4 April, 2016 Permalink | Reply

        I might. They’re awfully interesting stuff and probably do fit in the sweet spot of being just explainable enough without too many equations for the A To Z format.


  • Joseph Nebus 3:00 pm on Friday, 6 November, 2015 Permalink | Reply
    Tags: , differential equations, , , , ,   

    The Set Tour, Part 7: Matrices 


    I feel a bit odd about this week’s guest in the Set Tour. I’ve been mostly concentrating on sets that get used as the domains or ranges for functions a lot. The ones I want to talk about here don’t tend to serve the role of domain or range. But they are used a great deal in some interesting functions. So I loosen my rule about what to talk about.

    R^(m x n) and C^(m x n)

    R^(m x n) might explain itself by this point. If it doesn’t, then this may help: the “x” here is the multiplication symbol. “m” and “n” are positive whole numbers. They might be the same number; they might be different. So, are we done here?

    Maybe not quite. I was fibbing a little when I said “x” was the multiplication symbol. R^(2 x 3) is not a longer way of saying R^6, an ordered collection of six real-valued numbers. The x does represent a kind of product, though. What we mean by R^(2 x 3) is an ordered collection, two rows by three columns, of real-valued numbers. Say the “x” here aloud as “by” and you’re pronouncing it correctly.

    What we get is called a “matrix”. If we put into it only real-valued numbers, it’s a “real matrix”, or a “matrix of reals”. Sometimes mathematical terminology isn’t so hard to follow. Just as with vectors, R^n, it matters just how the numbers are organized. R^(2 x 3) means something completely different from what R^(3 x 2) means. And swapping which positions the numbers in the matrix occupy changes what matrix we have, as you might expect.

    You can add together matrices, exactly as you can add together vectors. The same rules even apply. You can only add together two matrices of the same size. They have to have the same number of rows and the same number of columns. You add them by adding together the numbers in the corresponding slots. It’s exactly what you would do if you went in without preconceptions.

    You can also multiply a matrix by a single number. We called this scalar multiplication back when we were working with vectors. With matrices, we call this scalar multiplication. If it strikes you that we could see vectors as a kind of matrix, yes, we can. Sometimes that’s wise. We can see a vector as a matrix in the set R^(1 x n) or as one in the set R^(n x 1), depending on just what we mean to do.

    It’s trickier to multiply two matrices together. As with vectors, multiplying the numbers in corresponding positions together doesn’t give us anything. What we do instead is a time-consuming but not actually hard process. According to its rules, something in R^(m x n) we can multiply by something in R^(n x k). “k” is another whole number. The second thing has to have exactly as many rows as the first thing has columns. What we get is a matrix in R^(m x k).

    I grant you maybe didn’t see that coming. Also a potential complication: if you can multiply something in R^(m x n) by something in R^(n x k), can you multiply the thing in R^(n x k) by the thing in R^(m x n)? … No, not unless k and m are the same number. Even if they are, you can’t count on getting the same product. Matrices are weird things this way. They’re also gateways to weirder things. But it is a productive weirdness, and I’ll explain why in a few paragraphs.
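    If you’d like to see the shape rules enforced without doing any arithmetic, numpy will do it. A sketch:

    ```python
    # A 2x3 matrix times a 3x4 matrix is a 2x4 matrix; the other order fails.
    import numpy as np

    A = np.ones((2, 3))    # something in R^(2 x 3)
    B = np.ones((3, 4))    # something in R^(3 x 4)
    print((A @ B).shape)   # (2, 4)
    try:
        B @ A              # B has 4 columns, A has only 2 rows
    except ValueError:
        print('no such product')
    ```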

    A matrix is a way of organizing terms. Those terms can be anything. Real matrices are surely the most common kind of matrix, at least in mathematical usage. Next in common use would be complex-valued matrices, much like how we get complex-valued vectors. These are written C^(m x n). A complex-valued matrix is different from a real-valued matrix. The terms inside the matrix can be complex-valued numbers, instead of real-valued numbers. Again, sometimes, these mathematical terms aren’t so tricky.

    I’ve heard occasionally of people organizing matrices of other sets. The notation is similar. If you’re building a matrix of “m” rows and “n” columns out of the things you find inside a set we’ll call H, then you write that as H^(m x n). I’m not saying you should do this, just that if you need to, that’s how to tell people what you’re doing.

    Now. We don’t really have a lot of functions that use matrices as domains, and I can think of fewer that use matrices as ranges. There are a couple of valuable ones, ones so valuable they get special names like “eigenvalue” and “eigenvector”. (Don’t worry about what those are.) They take in R^(m x n) or C^(m x n) and return a set of real- or complex-valued numbers, or real- or complex-valued vectors. Not even those, actually. Eigenvalues and eigenvectors are only meaningful if there are exactly as many rows as columns. That is, for R^(m x m) and C^(m x m). These are known as “square” matrices, just as you might guess if you were shaken awake and ordered to say what you guessed a “square matrix” might be.

    They’re important functions. There are some other important functions, with names like “rank” and “condition number” and the like. But they’re not many. I believe they’re not even thought of as functions, any more than we think of “the length of a vector” as primarily a function. They’re just properties of these matrices, that’s all.

    So why are they worth knowing? Besides the joy that comes of knowing something, I mean?

    Here’s one answer, and the one that I find most compelling. There is cultural bias in this: I come from an applications-heavy mathematical heritage. We like differential equations, which study how stuff changes in time and in space. It’s very easy to go from differential equations to ordered sets of equations. The first equation may describe how the position of particle 1 changes in time. It might describe how the velocity of the fluid moving past point 1 changes in time. It might describe how the temperature measured by sensor 1 changes as it moves. It doesn’t matter. We get a set of these equations together and we have a majestic set of differential equations.

    Now, the dirty little secret of differential equations: we can’t solve them. Most interesting physical phenomena are nonlinear. Linear stuff is easy. Small change 1 has effect A; small change 2 has effect B. If we make small change 1 and small change 2 together, this has effect A plus B. Nonlinear stuff, though … it just doesn’t work. Small change 1 has effect A; small change 2 has effect B. Small change 1 and small change 2 together has effect … A plus B plus some weird A times B thing plus some effect C that nobody saw coming and then C does something with A and B and now maybe we’d best hide.

    There are some nonlinear differential equations we can solve. Those are the result of heroic work and brilliant insights. Compared to all the things we would like to solve there’s not many of them. Methods to solve nonlinear differential equations are as precious as ways to slay krakens.

    But here’s what we can do. What we usually like to know about in systems are equilibriums. Those are the conditions in which the system stops changing. Those are interesting. We can usually find those points by boring but not conceptually challenging calculations. If we can’t, we can declare x_0 represents the equilibrium. If we still care, we leave calculating its actual values to the interested reader or hungry grad student.

    But what’s really interesting is: what happens if we’re near but not exactly at the equilibrium? Sometimes, we stay near it. Think of pushing a swing. However good a push you give, it’s going to settle back to the boring old equilibrium of dangling straight down. Sometimes, we go racing away from it. Think of trying to balance a pencil on its tip; if we did this perfectly it would stay balanced. It never does. We’re never perfect, or there’s some wind or somebody walks by and the perfect balance is foiled. It falls down and doesn’t bounce back up. Sometimes, whether it stays near or goes away depends on which way it’s away from the equilibrium.

    And now we finally get back to matrices. Suppose we are starting out near an equilibrium. We can, usually, approximate the differential equations that describe what will happen. The approximation may only be good if we’re just a tiny bit away from the equilibrium, but that might be all we really want to know. That approximation will be some linear differential equations. (If they’re not, then we’re just wasting our time.) And that system of linear differential equations we can describe using matrices.

    If we can write what we are interested in as a set of linear differential equations, then we have won. We can use the many powerful tools of matrix arithmetic — linear algebra, specifically — to tell us everything we want to know about the system. We can say whether a small push away from the equilibrium stays small, or whether it grows, or whether it depends. We can say how fast the small push shrinks, or grows (for a while). We can say how the system will change, approximately.
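    Here’s a sketch of that whole program for one classic system, a damped pendulum hanging near straight down. The numbers are made up; the point is that the linearized system’s eigenvalues having negative real parts tells us small pushes die out:

    ```python
    # Stability of an equilibrium from the linearized system's eigenvalues.
    import numpy as np

    g_over_l, damping = 9.8, 0.5             # assumed, made-up parameters
    M = np.array([[0.0, 1.0],
                  [-g_over_l, -damping]])    # linearized about hanging straight down
    eigenvalues = np.linalg.eigvals(M)
    print(eigenvalues)                               # a complex pair with real part -0.25
    print(all(ev.real < 0 for ev in eigenvalues))    # True: pushes shrink back
    ```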

    This is what I love in matrices. It’s not everything there is to them. But it’s enough to make matrices important to me.

     
  • Joseph Nebus 10:57 pm on Wednesday, 29 July, 2015 Permalink | Reply
    Tags: differential equations, meshes, ,   

    Reading the Comics, July 29, 2015: Not Entirely Reruns Edition 


    Zach Weinersmith’s Saturday Morning Breakfast Cereal (July 25) gets its scheduled appearance here with a properly formed Venn Diagram joke. I’m unqualified to speak for rap musicians. When mathematicians speak of something being “for reals” they mean they’re speaking about a variable that might be any of the real numbers. This is as opposed to limiting the variable to being some rational or irrational number, or being a whole number. It’s also as opposed to letting the variable be some complex-valued number, or some more exotic kind of number. It’s a way of saying what kind of thing we want to find true statements about.

    I don’t know when the Saturday Morning Breakfast Cereal first ran, but I know I’ve seen it appear in my Twitter feed. I believe all the Gocomics.com postings of this strip are reruns, but I haven’t read the strip long enough to say.

    Steve Sicula’s Home And Away (July 26) is built on the joke of kids wise to mathematics during summer vacation. I don’t think this is a rerun, although we’ve seen the joke this summer before.

    An angel with a square halo explains he was good^2.

    Daniel Beyer’s Offbeat Comics for the 27th of July, 2015.

    Daniel Beyer’s Offbeat Comics (July 27) depicts an angel with a square halo because “I was good^2.” The association between squaring a number and squares goes back a long time. Well, it’s right there in the name, isn’t it? Florian Cajori’s A History Of Mathematical Notations cites the term “latus” and the abbreviation “l” to represent the side of a square being used by the Roman surveyor Junius Nipsus in the second century; for centuries this would be as good a term as anyone had for the thing to be calculated. (Res, meaning “thing”, was also popular.) Once you’ve taken the idea of calculating based on the length of a square, the jump to “square” for “length times itself” seems like a tiny one. But Cajori doesn’t seem to have examples of that being written until the 16th century.

    The square of the quantity you’re interested in might be written q, for quadratus. The cube would be c, for cubus. The fourth power would be b or bq, for biquadratus, and so on. This is tolerable if you only have to work with a single unknown quantity, but the notation turns into gibberish the moment you want two variables in the mix. So it collapsed in the 17th century, replaced by the familiar x^2 and x^3 and so on. Many authors developed notations close to this: James Hume would write x^ii or x^iii; Pierre Hérigone x2 or x3, all in one line. René Descartes would write x^2 or x^3 or so, and many, many followed him. Still, quite a few people — including René Descartes, Isaac Newton, and even as late a figure as Carl Gauss, in the early 19th century — would resist “x^2”. They’d prefer “xx”. Gauss defended this on the grounds that “x^2” takes up just as much space as “xx” and so fails the biggest point of having notation.

    Corey Pandolph’s Toby, Robot Satan (July 27, rerun) uses sudoku as an example of the logic and reasoning problems that one would expect a robot should be able to do. It is weird to encounter one that’s helpless before them.

    Cory Thomas’s Watch Your Head (July 27, rerun from 2007) mentions “Chebyshev grids” and “infinite boundaries” as things someone doing mathematics on the computer would do. And it does so correctly. Differential equations describe how things change on some domain over space and time. They can be very hard to solve exactly, but can be put on the computer very well. For this, we pick a representative set of points which we call a mesh. And we find an approximate representation of the original differential equation, which we call a discretization or a difference equation. We can then solve this difference equation on the mesh, and if we’ve done our work right, this approximation will let us get a good estimate for the solution to the original problem over the whole original domain.

    A Chebyshev grid is a particular arrangement of mesh points. It’s not uniform; the points tend to clump up, becoming more common near the ends of the domain. This is useful if you have reason to expect that the boundaries are more interesting than the middle of the domain. There’s no sense wasting good computing power calculating boring stuff. The mesh is named for Pafnuty Chebyshev, a 19th Century Russian mathematician whose name is all over mathematics. Unfortunately since he was a 19th Century Russian mathematician, his name is transcribed into English all sorts of ways. Chebyshev seems to be most common today, though Tchebychev used to be quite popular, which is why polynomials of his might be abbreviated as T. There are many alternatives.
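    If you want to see the clumping, the standard Chebyshev points are just cosines of equally spaced angles. A sketch:

    ```python
    # Chebyshev points on [-1, 1]: projections of evenly spaced points on a circle.
    import numpy as np

    N = 8
    points = np.cos(np.pi * np.arange(N + 1) / N)
    print(np.sort(points))
    # the gaps near the ends (about 0.08) are much finer than near the middle (about 0.38)
    ```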

    Ah, but how do you represent infinite boundaries with the finitely many points of any calculable mesh? There are many approaches. One is to just draw a really wide mesh and trust that all the action is happening near the center so omitting the very farthest things doesn’t hurt too much. Or you might figure what the average of things far away is, and make a finite boundary that has whatever that value is. Another approach is to make the boundaries repeating: go far enough to the right and you loop back around to the left, go far enough up and you loop back around to down. Another approach is to create a mesh that is bundled up tight around the center, but that has points which do represent going off very, very far, maybe in principle infinitely far away. You’re allowed to create meshes that don’t space points uniformly, and that even move points as you compute. That’s harder work, but it’s legitimate numerical mathematics.
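    That last trick, bundling infinity into finitely many points, can be as simple as a change of variables. A sketch, assuming a rational stretching map suits your problem:

    ```python
    # Mapping grid points from (-1, 1) onto the whole real line.
    import numpy as np

    t = np.cos(np.pi * np.arange(1, 8) / 8)   # interior Chebyshev points
    x = t / (1 - t**2)                        # runs off toward +/- infinity at the ends
    print(x)   # modest values near 0, large values near the old endpoints
    ```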

    So, the mathematical work being described here is — so far as described — legitimate. I’m not competent to speak about the monkey side of the research.

    Greg Evans’s Luann Againn (July 29; rerun from July 29, 1987) name-drops the Law of Averages. There are actually multiple Laws of Averages, with slightly different assumptions and implications, but they all come to about the same meaning. You can expect that if some experiment is run repeatedly, the average value of the experiments will be close to the true value of whatever you’re measuring. An important step in proving this law was done by Pafnuty Chebyshev.

     
    • sheldonk2014 1:26 am on Thursday, 30 July, 2015 Permalink | Reply

      Bugs Bunny was 75 yesterday
      Bob Clampett was the developer, also Tweety and Porky Pig
      Mel Blanc was the voice


      • Joseph Nebus 5:47 am on Sunday, 2 August, 2015 Permalink | Reply

        That’s … actually kind of complicated. There are a lot of characters who are more or less Bugs Bunny, with the character really coming together around 1938. It’s a good lesson for people who want to study history, really. You have to figure out what you mean by ‘the first Bugs Bunny cartoon’ and defend your choice to say anything about his origins.


  • Joseph Nebus 3:05 pm on Friday, 12 June, 2015 Permalink | Reply
    Tags: , differential equations, , , , , projection, , traffic   

    A Summer 2015 Mathematics A To Z: into 


    Into.

    The definition of “into” will call back to my A to Z piece on “bijections”. It particularly calls on what mathematicians mean by a function. When a mathematician talks about a function she means a combination of three components. The first is a set called the domain. The second is a set called the range. The last is a rule that matches up things in the domain to things in the range.

    We said the function was “onto” if absolutely everything which was in the range got used. That is, if everything in the range has at least one thing in the domain that the rule matches to it. The function that has domain of -3 to 3, and range of -27 to 27, and the rule that matches a number x in the domain to the number x^3 in the range is “onto”.
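    With small finite stand-ins for the domain and range you can even make the computer check the definition. A sketch, with a toy rule in place of the cubing one:

    ```python
    # The 'onto' check: does every member of the range get used by the rule?
    domain = {0, 1, 2, 3}
    rule = lambda x: x % 2

    used = {rule(x) for x in domain}
    print(used == {0, 1})      # True: onto the range {0, 1}
    print(used == {0, 1, 2})   # False: with range {0, 1, 2} the 2 goes unused
    ```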

    (More …)

     
    • elkement 6:26 am on Sunday, 14 June, 2015 Permalink | Reply

      Here it finally shows that my math education was originally in German. I haven’t known these properties as ‘onto’ or ‘into’, but only surjective and injective – which are the same words in German. I think we don’t have such nice common language terms for that.


      • Joseph Nebus 2:27 am on Tuesday, 16 June, 2015 Permalink | Reply

        The sense I got as an undergraduate was there was a divide between (English-speaking) mathematicians who preferred onto and into and those who preferred surjective and injective. I can see the benefits of either choice. Onto and into have that nice common Anglo-Saxon touch, and common words tend to write better. But there’s no good matching common word for bijective. And the injective-surjective-bijective triplet seem to reinforce each word’s meaning and probably help all three concepts stand together better than any one alone does.


  • Joseph Nebus 3:04 pm on Monday, 25 May, 2015 Permalink | Reply
    Tags: , , , , differential equations, , higher mathematics,   

    A Summer 2015 Mathematics A To Z: ansatz 


    Sue Archer at the Doorway Between Worlds blog recently completed an A to Z challenge. I decided to follow her model and challenge and intend to do a little tour of some mathematical terms through the alphabet. My intent is to focus on some that are interesting terms of art that I feel non-mathematicians never hear. Or that they never hear clearly. Indeed, my first example is one I’m not sure I ever heard clearly described.

    Ansatz.

    I first encountered this term in grad school. I can’t tell you when. I just realized that every couple sessions in differential equations the professor mentioned the ansatz for this problem. By then it felt too late to ask what it was I’d missed. In hindsight I’m not sure the professor ever made it clear. My research suggests the word is still a dialect rather than part of the universal language of mathematicians, and that it isn’t quite precisely defined.

    What a mathematician means by the “ansatz” is the collection of ideas that go into solving a problem. This may be an assumption of what the solution should look like. It might be a listing of the physical or mathematical properties a valid solution would have to have. It could be the set of things you judge should be included, or ignored, in constructing a mathematical model of something. In short the ansatz is the set of possibly ad hoc assumptions you have to bring to a topic to make it something answerable. It’s different from the axioms of the field or the postulates for a problem. An axiom or postulate is assumed to be true by definition. The ansatz is a bunch of ideas we suppose are true because they seem likely to bring us to a solution.

    An ansatz is good for getting an answer. It doesn’t do anything to verify that the answer means anything, though. The ansatz contains assumptions you the mathematician brought to the problem. You need to argue that the assumptions are reasonable, and reflect the actual problem you’re studying. You also should prove that the answer ultimately derived matches the actual behavior of whatever you were studying. Validating a solution can be the hardest part of mathematics, other than all the other parts of mathematics.
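    The most famous ansatz I know of is guessing that a linear differential equation’s solution looks like an exponential. A sketch with sympy, for the made-up equation x'' + x' - 6x = 0:

    ```python
    # The exponential ansatz turns a differential equation into algebra.
    import sympy as sp

    t, r = sp.symbols('t r')
    x = sp.exp(r * t)                   # the ansatz: guess x(t) = e^(r t)
    leftover = sp.diff(x, t, 2) + sp.diff(x, t) - 6 * x
    print(sp.factor(leftover / x))      # (r - 2)*(r + 3): the characteristic equation
    print(sp.solve(r**2 + r - 6, r))    # [-3, 2], so e^(2t) and e^(-3t) both work
    ```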

     
    • Lynette d'Arty-Cross 3:28 pm on Monday, 25 May, 2015 Permalink | Reply

      I first ran across this word when I was learning German. One of its meanings is “formation” or “beginning.”
      Interesting post. :)


      • Joseph Nebus 2:42 am on Tuesday, 26 May, 2015 Permalink | Reply

        I’d thought it had a meaning like that. There are a good number of mathematical terms that are German in origin — well, “eigenvalue”, along with related words like “eigenvector” and “eigenfunction” are evidence of that — though I’m surprised to find one that’s in the process of becoming part of the English mathematician’s vocabulary.


        • Lynette d'Arty-Cross 4:05 am on Tuesday, 26 May, 2015 Permalink | Reply

          Interesting. Eigen means “own” in the sense that the object is separate, such as
          “my own car.” Roughly 60% of English comes from Latin via Old German, so there’s quite a common history.


          • Joseph Nebus 4:23 am on Tuesday, 26 May, 2015 Permalink | Reply

            If the eigenvalue, or other eigen-thing, has to be rendered in English only it’s usually turned into “characteristic value” or “characteristic vector” or so on. And that’s fair enough.

            The eigenvalues (or other things) can be seen as kind of the spectroscopic analysis of a mathematical object. (This is a very loose metaphor.) If you’ve got a mathematical object describing a system, then the eigenvalues (eigenvectors, eigenfunctions, et cetera) can be simpler ways to describe how the system behaves.


    • scifihammy 4:00 pm on Monday, 25 May, 2015 Permalink | Reply

      Your A to Z is a good idea. Interesting reading :)


      • Joseph Nebus 2:41 am on Tuesday, 26 May, 2015 Permalink | Reply

        Thank you. It’s not my idea, but I can at least recognize a fruitful one when it’s presented to me.


    • Barb Knowles 10:54 pm on Tuesday, 26 May, 2015 Permalink | Reply

      I just think it’s a very cool word.


    • sheldonk2014 8:48 pm on Thursday, 11 June, 2015 Permalink | Reply

      I like the idea that you have to validate a solution; it sounds very definitive


      • Joseph Nebus 4:13 am on Saturday, 13 June, 2015 Permalink | Reply

        It’s, sadly, the boring part of learning something. You know the part of mathematics class where you get, say, the root of a polynomial and then you’re supposed to go back and put it into the polynomial and see if it really is zero? That’s validation. But for a simple problem it’s dull because if you did the work right there’s nothing being revealed.

        Where it’s important is when you try modeling something new and interesting because you don’t know that you made all the right choices in your model. (Also you might not be sure you calculated things based on the model right.) But that’s not something most mathematics classes reach, not below the upper levels of undergraduate life anyway.


  • Joseph Nebus 8:00 pm on Monday, 9 December, 2013 Permalink | Reply
    Tags: differential equations,   

    What is Physics all about? 


    Over on the Reading Penrose blog, Jean Louis Van Belle (and I apologize if I’ve got the name capitalized or punctuated wrong but I couldn’t find the author’s name except in a run-together, uncapitalized form) is trying to understand Roger Penrose’s Road to Reality, about the various laws of physics as we understand them. In the entry for the 6th of December, “Ordinary Differential equations (II)”, he gets to the question “What’s Physics All About?” and comes to what I have to agree is the harsh fact: a lot of physics is about solving differential equations.

    Some of them are ordinary differential equations, some of them are partial differential equations, but really, a lot of it is differential equations. Some of it is setting up models for differential equations. Here, though, he looks at a number of ordinary differential equations and how they can be classified. The post is a bit cryptic — he intends the blog to be his working notes while he reads a challenging book — but I think it’s still worth recommending as a quick tour through some of the most common, physics-relevant, kinds of ordinary differential equation.

     
  • Joseph Nebus 9:05 pm on Saturday, 2 November, 2013 Permalink | Reply
    Tags: boolean logic, , differential equations, George Boole,   

    George Boole’s Birthday 


    The Maths History feed on Twitter reminded me that the second of November is the birthday of George Boole, one of a handful of people who’s managed to get a critically important computer data type named for him (others, of course, include Arthur von Integer and the Lady Annabelle String). Reminded is the wrong word; actually, I didn’t have any idea when his birthday was, other than that it was in the first half of the 19th century. To that extent I was right (it was 1815).

    He’s famous, to the extent anyone in mathematics who isn’t Newton or Leibniz is, for his work in logic. “Boolean algebra” is even almost the default term for the kind of reasoning done on variables that may have either of exactly two possible values, which match neatly to the idea of propositions being either true or false. He’d also publicized how neatly the study of logic and the manipulation of algebraic symbols could parallel one another, which is a familiar enough notion that it takes some imagination to realize that it isn’t obviously so.

    Boole also did work on linear differential equations, which are important because differential equations are nearly inevitably the way one describes a system in which the current state of the system affects how it is going to change, and linear differential equations are nearly the only kinds of differential equations that can actually be exactly solved. (There are some nonlinear differential equations that can be solved, but more commonly, we’ll find a linear differential equation that’s close enough to the original. Many nonlinear differential equations can also be approximately solved numerically, but that’s also quite difficult.)

    His MacTutor History of Mathematics biography notes that Boole (when young) spent five years trying to teach himself differential and integral calculus — money just didn’t allow for him to attend school or hire a tutor — although given that he was, before the age of fourteen, able to teach himself ancient Greek I can certainly understand his supposition that he just needed the right books and some hard work. Apparently, at age fourteen he translated a poem by Meleager — I assume the poet from the first century BCE, though MacTutor doesn’t specify; there was also a Meleager who was briefly king of Macedon in 279 BCE, and another some decades before that who was a general serving Alexander the Great — so well that when it was published a local schoolmaster argued that a 14-year-old could not possibly have done that translation. He’d also, something I didn’t know until today, married Mary Everest, niece of the fellow whose name is on that tall mountain.

     
  • Joseph Nebus 10:18 pm on Tuesday, 15 October, 2013 Permalink | Reply
    Tags: car, differential equations, , , shock absorbers   

    From ElKement: On The Relation Of Jurassic Park and Alien Jelly Flowing Through Hyperspace 


    I’m frightfully late on following up on this, but ElKement has another entry in the series regarding quantum field theory, this one engagingly titled “On The Relation Of Jurassic Park and Alien Jelly Flowing Through Hyperspace”. The objective is to introduce the concept of phase space, a way of looking at physics problems that marks maybe the biggest thing one really needs to understand if one wants to be not just a physics major (or, for many parts of the field, a mathematics major) but a grad student.

    As an undergraduate, it’s easy to get all sorts of problems in which, to pick an example, one models a damped harmonic oscillator. A good example of this is how one models the way a car bounces up and down after it goes over a bump, when the shock absorbers are working. You as a student are given some physical properties — how easily the car bounces, how well the shock absorbers soak up bounces — and how the first bounce went — how far the car bounced upward, how quickly it started going upward — and then work out from that what the motion will be ever after. It’s a bit of calculus and you might do it analytically, working out a complicated formula, or you might do it numerically, letting one of many different computer programs do the work and probably draw a picture showing what happens. That’s shown in class, and then for homework you do a couple problems just like that but with different numbers, and for the exam you get another one yet, and one more might turn up on the final exam.
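    For the record, the numerical version of that homework runs to a few lines these days. A sketch with made-up numbers for the bounciness and the shock absorbers, using scipy:

    ```python
    # A damped harmonic oscillator: the car settling down after a bump.
    from scipy.integrate import solve_ivp

    k_over_m, damping = 4.0, 0.8   # assumed: bounciness, shock absorber quality
    def bounce(t, y):
        height, velocity = y
        return [velocity, -k_over_m * height - damping * velocity]

    solution = solve_ivp(bounce, (0, 10), [0.1, 0.0])   # first bounce: 0.1 up, at rest
    print(solution.y[0, -1])   # essentially zero: the bouncing has died away
    ```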

    (More …)

     
    • elkement 9:29 am on Wednesday, 16 October, 2013 Permalink | Reply

      Thanks again, Joseph. I guess your example concerned with a car makes more sense than my rather philosophical ramblings so I will reblog your post. From questions I have got on my article I conclude that the introduction of those large number of dimensions was probably not self-explanatory, so I am working on an update to this article with more illustrations related to hyperspace. Since I have not seen Liouville equation popularized (in the same way as ‘chaos theory’ is) I have to prepare some illustrations myself which is going to take a while.

      I am also considering to pick a non-physics example as I have noticed that ‘many dimensions’ in this statistical sense (different states of a single system) got confused with those fancy dimensions related to string theory. I think I need to stress more that this is a space of ‘possibilities’ and not some space which is out there (and we are just those infamous beetles on an inflating balloon that are not able to feel those dimensions).


      • Joseph Nebus 3:02 am on Friday, 18 October, 2013 Permalink | Reply

        The car example was actually on my mind because I was thinking about writing a post regarding how I set up a differential equations laboratory project (where the students had to answer a set of problems both analytically and numerically, using a Mathematica-like program called Maple to work it out), and then I realized it was a pretty good setup to introducing phase space. I imagine trying to introduce them has to be done in either pendulums or masses-on-springs for want of other simple systems that are interesting and don’t have too many spatial dimensions.

        Also I suspect you’re right in talk about dimensions confusing people because “dimension” is a word with so many meanings. I’m not sure of a good alternate word to use, though, and if I do carry on I might just give in and speak of “phase space dimension” instead. The construction is horrible but at least it makes the context clear enough.


    • elkement 9:32 am on Wednesday, 16 October, 2013 Permalink | Reply

      Reblogged this on Theory and Practice of Trying to Combine Just Anything and commented:
      I am still working on a more self-explanatory update to my previous physics post … trying to explain that multi-dimensional hyperspace is really a space of all potential states a single system might exhibit – a space of possibilities and not those infamous multi-dimensional worlds that might really be ‘out there’ according to string theorists. In the meantime, please enjoy mathematician Joseph Nebus’ additions to my post which include a down-to-earth example.


    • elkement 9:37 am on Wednesday, 16 October, 2013 Permalink | Reply

      (I would really like the typos in my reblog here ;-) )


      • Joseph Nebus 3:04 am on Friday, 18 October, 2013 Permalink | Reply

        Aw, and I do want to thank you for kindly reblogging me. I think yours might be my first referral.


    • Pairodox Farm 3:58 pm on Wednesday, 16 October, 2013 Permalink | Reply

      It’s nice to see Elke’s influence spreading far and wide about the internet. D


      • Joseph Nebus 3:17 am on Friday, 18 October, 2013 Permalink | Reply

        It is, and I’m glad to have found her blog.


        • elkement 7:12 am on Friday, 18 October, 2013 Permalink | Reply

          Thanks, Dave and Joseph! Flattering to hear people talk about me (in a nice way) :-)

          I am also very grateful for those referrals of yours, Joseph. My blog is rather anti-popular – concluding from my numbers (views, followers) when comparing with other bloggers who blog for about the same time. Probably my blog is also not enough ‘niche’ and specialist and too much all over the place (Search term poetry, physics, random thoughts on society and the corporate world…)
          Blog visitors constitute probably another hyperspace whose dynamics should be modeled. I guess we might find out that occasional spikes in popularity (Freshly Pressed, going viral) are dictated by chance.

          But I believe it is better to have a small number of loyal followers you can have very interesting discussions with instead of 1000 people following and liking your blog because of its established popularity (winner-take-all effect in networks). I suppose if we would investigate the numbers of followers on all blogs in the world, assuming a reasonable time you need to ‘invest’ in reading as a follower… we would find out that most followers of those very popular blogs actually do not really follow because then they would need to spend 24/7 on speed reading.

          This was probably quite random a comment. but I am re-reading Nassim Taleb’s books currently – so I am intrigued by anything Black-Swan-like …


          • Joseph Nebus 11:35 pm on Tuesday, 22 October, 2013 Permalink | Reply

            Ah, well, the blog is always greener … I worry about my own lack of popularity and rather admire the community you’ve got over at your site. I haven’t figured out the knack for drawing people out to making comments, yet. I might start giving out challenges.

            As you say, though, a small group of people who’re actually listening is rewarding, and I just hope to get one new reader for every three spambots offering to get me to get rich by blogging.

            Oddly I know that I got one of Taleb’s books from the library, as an audio book (I enjoy listening to them while commuting), but I didn’t get far into it. I’m not sure what happened; conceivably it was recalled back to the library.


  • Joseph Nebus 8:50 pm on Monday, 8 July, 2013 Permalink | Reply
    Tags: , differential equations,   

    On exact and inexact differentials 


    The CarnotCycle blog recently posted a nice little article titled “On Exact And Inexact Differentials” and I’m bringing it to people’s attention because it’s the sort of thing which would have been extremely useful to me at a time when I was reading calculus-heavy texts that just assumed you knew what exact differentials were, without being aware that you probably missed the day in intro differential equations when they were explained. (That was by far my worst performance in a class. I have no excuse.)

    So this isn’t going to be the most accessible article you run across on my blog here, until I finish making the switch to a full-on advanced statistical mechanics course. But if you start getting into, particularly, thermodynamics and wonder where this particular and slightly funky string of symbols comes from, this is a nice little warmup. For extra help, CarnotCycle also explains what makes something an inexact differential.
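    If you want the punchline in executable form: a differential M dx + N dy is exact when the cross-partial derivatives match, that is, when dM/dy equals dN/dx. A sketch with sympy:

    ```python
    # The standard exactness test: does dM/dy equal dN/dx?
    import sympy as sp

    x, y = sp.symbols('x y')
    M, N = 2*x*y, x**2                      # the differential 2xy dx + x^2 dy
    print(sp.diff(M, y) == sp.diff(N, x))   # True: exact; it's d(x^2 y)
    M, N = y, -x                            # the differential y dx - x dy
    print(sp.diff(M, y) == sp.diff(N, x))   # False: inexact
    ```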


    carnotcycle

    From the search term phrases that show up on this blog’s stats, CarnotCycle detects that a significant segment of visitors are studying foundation level thermodynamics  at colleges and universities around the world. So what better than a post that tackles that favorite test topic – exact and inexact differentials.

    When I was an undergraduate, back in the time of Noah, we were first taught the visual approach to these things. Later we dispensed with diagrams and got our answers purely through the operations of calculus, but either approach is equally instructive. CarnotCycle herewith presents them both.

    – – – –

    The visual approach

    Ok, let’s start off down the visual track by contemplating the following pair of pressure-volume diagrams:

    visual track

    The points A and B have identical coordinates on both diagrams, with A and B respectively representing the initial and final states of a closed PVT system, such as an…

    View original post 782 more words

     
  • Joseph Nebus 2:50 am on Wednesday, 12 September, 2012 Permalink | Reply
    Tags: apple, , , differential equations, , , ,   

    Reading the Comics, September 11, 2012 


    Since the last installment of these mathematics-themed comic strips there’s been a steady drizzle of syndicated comics touching on something mathematical. This probably reflects the back-to-school concerns that are naturally going to interest the people drawing either Precocious Children strips or Three Generations And A Dog strips.

    (More …)

     
    • Tim Erickson 6:18 am on Saturday, 15 September, 2012 Permalink | Reply

      Nice review! Alas, too many mainstream comics usually mention math as, “it’s horrible, it’s hard, kids hate it, parents couldn’t do it.”


      • Joseph Nebus 4:58 am on Sunday, 16 September, 2012 Permalink | Reply

        Thank you. Unfortunately, yeah, “we can’t understand it” is the obvious and easiest joke to make about mathematics, and not enough people try to dig for a deeper gag.

        I am always genuinely delighted to find a comic strip making a joke about the New Math, possibly because I grew up learning math more or less the New way and felt no terror at it, possibly because I’m always delighted by jokes that start out 30 years past their expiration date.


  • Joseph Nebus 12:16 am on Saturday, 19 May, 2012 Permalink | Reply
    Tags: , differential equations, , , Jacob Hermann, , slope, Vincenzo Riccati   

    Why The Slope Is Too Interesting 


    After we have the intercept, the other thing we need is the slope. This is a very easy thing to start calculating and it’s extremely testable, but the idea weaves its way deep into all mathematics. It’s got an obvious physical interpretation. Imagine the x-coordinates are how far we are from some reference point in a horizontal direction, and the y-coordinates are how far we are from some reference point in the vertical direction. Then the slope is just the grade of the line: how much we move up or down for a given movement forward or back. It’s easy to calculate, it’s kind of obvious, so here’s what’s neat about it.
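    In code the grade-of-the-line reading is about as short as it sounds, rise over run. A sketch:

    ```python
    # Slope as grade: how much you go up per unit you move forward.
    def slope(p, q):
        (x1, y1), (x2, y2) = p, q
        return (y2 - y1) / (x2 - x1)

    print(slope((0, 1), (4, 3)))   # 0.5: up half a unit for each unit forward
    ```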

    (More …)

     
  • Joseph Nebus 12:10 am on Saturday, 24 September, 2011 Permalink | Reply
    Tags: best fit, , Christopher Hibbert, differential equations, finite elements, Francis Sheppard, , , prices, , William Herschel, Wolfgang Schivelbusch   

    Did King George III pay too little for astronomers or too much for tea? 


    In the opening pages of his 1998 biography George III: A Personal History, Christopher Hibbert tosses a remarkable statement into a footnote just after describing the allowance of Frederick, Prince of Wales, at George III’s birth:

    Because of the fluctuating rate of inflation and other reasons it is not really practicable to translate eighteenth-century sums into present-day equivalents. Multiplying the figures in this book by about sixty should give a very rough guide for the years before 1793. For the years of war between 1793 and 1815 the reader should multiply by about thirty, and thereafter by about forty.

    “Not really practicable” is wonderful understatement: it’s barely possible to compare the prices of things today to those of a half-century ago, and the modern economy at least existed in cartoon form back then. I could conceivably have been paid for programming computers back then, but it would be harder for me to get into the field. To go back 250 years — before electricity, mass markets, public education, mass production, general incorporation laws, and nearly every form of transportation not muscle or wind-powered — and try to compare prices is nonsense. We may as well ask how many haikus it would take to tell Homer’s Odyssey, or how many limericks Ovid’s Metamorphoses would be.
    (More …)

     