Tagged: mst3k

  • Joseph Nebus 6:00 pm on Thursday, 18 May, 2017 Permalink | Reply
    Tags: mst3k

    Everything Interesting There Is To Say About Springs 


    I need another supplemental essay to get to the next part in Why Stuff Can Orbit. (Here’s the last part.) You probably guessed it’s about springs. They’re useful to know about. Why? That one killer Mystery Science Theater 3000 short, yes. But also because they turn up everywhere.

    Not because there are literally springs in everything. Not with the rise in anti-spring political forces. But what makes a spring is a force that pushes something back where it came from. It pushes with a force that grows just as fast as the distance from where it came grows. Most anything that’s stable, that has some normal state which it tends to look like, acts like this. A small nudging away from the normal state gets met with some resistance. A bigger nudge meets bigger resistance. And most stuff that we see is stable. If it weren’t stable it would have broken before we got there.

    (There are exceptions. Stable is, sometimes, about perspective. It can be that something is unstable but it takes so long to break that we don’t have to worry about it. Uranium, for example, is dying, turning slowly into stable elements like lead and helium. There will come a day there’s none left in the Earth. But it takes so long to break down that, barring surprises, the Earth will have broken down into something else first. And it may be that something is unstable, but it’s created by something that’s always going on. Oxygen in the atmosphere is always busy combining with other chemicals. But oxygen stays in the atmosphere because life keeps breaking it out of other chemicals.)

    Now I need to put in some terms. Start with your thing. It’s on a spring, literally or metaphorically. Don’t care. If it isn’t being pushed in any direction then it’s at rest. Or it’s at an equilibrium. I don’t want to call this the ideal or natural state. That suggests some moral superiority to one way of existing over another, and how do I know what’s right for your thing? I can tell you what it acts like. It’s your business whether it should. Anyway, your thing has an equilibrium.

    Next term is the displacement. It’s how far your thing is from the equilibrium. If it’s really a block of wood on a spring, like it is in high school physics, this displacement is how far the spring is stretched out. In equations I’ll represent this as ‘x’ because I’m not going to go looking deep for letters for something like this. What value ‘x’ has will change with time. This is what makes it a physics problem. If we want to make clear that ‘x’ does depend on time we might write ‘x(t)’. We might go all the way and start at the top of the page with ‘x = x(t)’, just in case.

    If ‘x’ is a positive number it means your thing is displaced in one direction. If ‘x’ is a negative number it was displaced in the opposite direction. By ‘one direction’ I mean ‘to the right, or else up’. By ‘the opposite direction’ I mean ‘to the left, or else down’. Yes, you can pick any direction you like but why are you making life harder for everyone? Unless there’s something compelling about the setup of your thing that makes another choice make sense just go along with what everyone else is doing. Apply your creativity and iconoclasm where it’ll make your life better instead.

    Also, we only have to worry about one direction. This might surprise you. If you’ve played much with springs you might have noticed how they’re three-dimensional objects. You can set stuff swinging back and forth in two directions at once. That’s all right. We can describe a two-dimensional displacement as a displacement in one direction plus a displacement perpendicular to that. And if there’s no such thing as friction, they won’t interact. We can pretend they’re two problems that happen to be running on the same spring at the same time. So here I declare: we can ignore friction and pretend it doesn’t matter. We don’t have to deal with more than one direction at a time.

    (It’s not only friction. There’s problems about how energy gets transmitted between ways the thing can oscillate. This is what causes what starts out as a big whack in one direction to turn into a middling little circular wobbling. That’s a higher level physics than I want to do right now. So here I declare: we can ignore that and pretend it doesn’t matter.)

    Whether your thing is displaced or not it’s got some potential energy. This can be as large or as small as you like, down to some minimum when your thing is at equilibrium. The potential energy we represent as a number named ‘U’ because of good reasons that somebody surely had. The potential energy of a spring depends on the square of the displacement. We can write its value as ‘U = ½ k x²‘. Here ‘k’ is a number known as the spring constant. It describes how strongly the spring reacts; the bigger ‘k’ is, the more any displacement’s met with a contrary force. It’ll be a positive number. ½ is that same old one-half that you know from ideas being half-baked or going-off being half-cocked.

    Potential energy is great. If you can describe a physics problem with its energy you’re in good shape. It lets us bring physical intuition into understanding things. Imagine a bowl or a Habitrail-type ramp that’s got the cross-section of your potential energy. Drop a little marble into it. How the marble rolls? That’s what your thingy does in that potential energy.

    Also we have mathematics. Calculus, particularly differential equations, lets us work out how the position of your thing will change. We need one more piece for this. That’s the momentum of your thing. Momentum is traditionally represented with the letter ‘p’. And now here’s how stuff moves when you know the potential energy ‘U’:

    \frac{dp}{dt} = - \frac{\partial U}{\partial x}

    Let me unpack that. \frac{dp}{dt} — also known as \frac{d}{dt}p if that looks better — is “the derivative of p with respect to t”. It means “how the value of the momentum changes as the time changes”. And that is equal to minus one times …

    You might guess that \frac{\partial U}{\partial x} — also written as \frac{\partial}{\partial x} U — is some kind of derivative. The \partial looks kind of like a cursive d, after all. It’s known as the partial derivative, because it means we look at how ‘U’ changes as ‘x’ and nothing else at all changes. With the normal, ‘d’ style full derivative, we have to track how all the variables change as the ‘t’ we’re interested in changes. In this particular problem the difference doesn’t matter. But there are problems where it does matter and that’s why I’m careful about the symbols.

    So now we fall back on how to take derivatives. This gives us the equation that describes how the physics of your thing on a spring works:

    \frac{dp}{dt} = - k x

    You’re maybe underwhelmed. This is because we haven’t got any idea how the momentum ‘p’ relates to the displacement ‘x’. Well, we do, because I know and if you’re still reading at this point you know full well what momentum is. But let me make it official. Momentum is, for this kind of thing, the mass ‘m’ of your thing times how its position is changing, which is \frac{dx}{dt} . The mass of your thing isn’t changing. If you’re going to let it change then we’re doing some screwy rocket problem and that’s a different article. So it’s easy to get the momentum out of that problem. We get instead the second derivative of the displacement with respect to time:

    m\frac{d^2 x}{dt^2} = - kx

    Fine, then. Does that tell us anything about what ‘x(t)’ is? Not yet, but I will now share with you one of the top secrets that only real mathematicians know. We will take a guess to what the answer probably is. Then we’ll see in what circumstances that answer could possibly be right. Does this seem ad hoc? Fine, so it’s ad hoc. Here is the secret of mathematicians:

    It’s fine if you get your answer by any stupid method you like, including guessing and getting lucky, as long as you check that your answer is right.

    Oh, sure, we’d rather you get an answer systematically, since a system might give us ideas how to find answers in new problems. But if all we want is an answer then, by definition, we don’t care where it came from. Anyway, we’re making a particular guess, one that’s very good for this sort of problem. Indeed, this guess is our system. A lot of guesses at solving differential equations use exactly this guess. Are you ready for my guess about what solves this? Because here it is.

    We should expect that

    x(t) = C e^{r t}

    Here ‘C’ is some constant number, not yet known. And ‘r’ is some constant number, not yet known. ‘t’ is time. ‘e’ is that number 2.71828(etc) that always turns up in these problems. Why? Because its derivative is very easy to take, and if we have to take derivatives we want them to be easy to take. The first derivative of Ce^{rt} with respect to ‘t’ is r Ce^{rt} . The second derivative with respect to ‘t’ is r^2 Ce^{rt} . So here’s what we have:

    m r^2 Ce^{rt} = - k Ce^{rt}

    What we’d like to find are the values for ‘C’ and ‘r’ that make this equation true. It’s got to be true for every value of ‘t’, yes. But this is actually an easy equation to solve. Why? Because the C e^{rt} on the left side has to equal the C e^{rt} on the right side. As long as they’re not equal to zero and hey, what do you know? C e^{rt} can’t be zero unless ‘C’ is zero. So as long as ‘C’ is any number at all in the world except zero we can divide this ugly lump of symbols out of both sides. (If ‘C’ is zero, then this equation is 0 = 0 which is true enough, I guess.) What’s left?

    m r^2 = -k

    OK, so, we have no idea what ‘C’ is and we’re not going to have any. That’s all right. We’ll get it later. What we can get is ‘r’. You’ve probably got there already. There’s two possible answers:

    r = \pm\sqrt{-\frac{k}{m}}

    You might not like that. You remember that ‘k’ has to be positive, and if mass ‘m’ isn’t positive something’s screwed up. So what are we doing with the square root of a negative number? Yes, we’re getting imaginary numbers. Two imaginary numbers, in fact:

    r = \imath \sqrt{\frac{k}{m}}, r = - \imath \sqrt{\frac{k}{m}}

    Which is right? Both. In some combination, too. It’ll be a bit with that first ‘r’ plus a bit with that second ‘r’. In the differential equations trade this is called superposition. We’ll have information that tells us how much uses the first ‘r’ and how much uses the second.

    You might still be upset. Hey, we’ve got these imaginary numbers here describing how a spring moves and while you might not be one of those high-price physicists you see all over the media you know springs aren’t imaginary. I’ve got a couple responses to that. Some are semantic. We only call these numbers “imaginary” because when we first noticed they were useful things we didn’t know what to make of them. The label is an arbitrary thing that doesn’t make any demands of the numbers. If we had called them, oh, “Cardanic numbers” instead would you be upset that you didn’t see any Cardanos in your springs?

    My high-class semantic response is to ask in exactly what way is the “square root of minus one” any less imaginary than “three”? Can you give me a handful of three? No? Didn’t think so.

    And then the practical response is: don’t worry. Exponentials raised to imaginary numbers do something amazing. They turn into sine waves. Well, sine and cosine waves. I’ll spare you just why. You can find it by looking at the first twelve or so posts of any pop mathematics blog and its article about how amazing Euler’s Formula is. Given that Euler published, like, 2,038 books and papers through his life and the fifty years after his death it took to clear the backlog you might think, “Euler had a lot of Formulas, right? Identities too?” Yes, he did, but you’ll know this one when you see it.
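    You can watch that amazing turn happen numerically, if you like. Here’s a quick check in Python (my addition, not the post’s; the standard cmath module handles complex exponentials):

```python
import cmath
import math

# Euler's Formula: e^(i*theta) equals cos(theta) + i*sin(theta).
# Compare the two sides at an arbitrary angle.
theta = 0.75
left = cmath.exp(1j * theta)
right = complex(math.cos(theta), math.sin(theta))
print(abs(left - right) < 1e-12)  # True
```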

    What’s important is that the displacement of your thing on a spring will be described by a function which looks like this:

    x(t) = C_1 e^{\imath \sqrt{\frac{k}{m}} t} + C_2 e^{-\imath \sqrt{\frac{k}{m}} t}

    for two constants, ‘C1‘ and ‘C2‘. These were the things we called ‘C’ back when we thought the answer might be Ce^{rt} ; there’s two of them because there’s two r’s. I give you my word this is equivalent to a formula like this, but you can make me show my work if you must:

    x(t) = A cos\left(\sqrt{\frac{k}{m}} t\right) + B sin\left(\sqrt{\frac{k}{m}} t\right)

    for some (other) constants ‘A’ and ‘B’. Cosine and sine are the old things you remember from learning about cosine and sine.
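    If you’d rather a machine show my work for me: here’s a sketch using the sympy computer-algebra library (my choice of tool, nothing the post depends on) confirming this form really does satisfy m times the second derivative of ‘x’ equals minus k x:

```python
import sympy as sp

t = sp.symbols('t')
k, m, A, B = sp.symbols('k m A B', positive=True)
omega = sp.sqrt(k / m)
x = A * sp.cos(omega * t) + B * sp.sin(omega * t)

# Plug the guess into m x'' + k x; a true solution leaves zero behind.
residual = m * sp.diff(x, t, 2) + k * x
print(sp.simplify(residual))  # 0
```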

    OK, but what are ‘A’ and ‘B’?

    Generically? We don’t care. Some numbers. Maybe zero. Maybe not. The pattern, how the displacement changes over time, will be the same whatever they are. It’ll be regular oscillation. At one time your thing will be as far from the equilibrium as it gets, and not moving toward or away from the center. At one time it’ll be back at the center and moving as fast as it can. At another time it’ll be as far away from the equilibrium as it gets, but on the other side. At another time it’ll be back at the equilibrium and moving as fast as it ever does, but the other way. How far is that maximum? What’s the fastest it travels?

    The answer’s in how we started. If we start at the equilibrium without any kind of movement we’re never going to leave the equilibrium. We have to get nudged out of it. But what kind of nudge? There are three ways you can nudge something out.

    You can tug it out some and let it go from rest. This is the easiest: then ‘A’ is however big your tug was and ‘B’ is zero.

    You can let it start from equilibrium but give it a good whack so it’s moving at some initial velocity. This is the next-easiest: ‘A’ is zero, and ‘B’ is … no, not the initial velocity. You need to look at what the velocity of your thing is at the start. That’s the first derivative:

    \frac{dx}{dt} = -\sqrt{\frac{k}{m}}A sin\left(\sqrt{\frac{k}{m}} t\right) + \sqrt{\frac{k}{m}} B cos\left(\sqrt{\frac{k}{m}} t\right)

    The start is when time is zero because we don’t need to be difficult. When ‘t’ is zero the above velocity is \sqrt{\frac{k}{m}} B . So that product has to be the initial velocity. That’s not much harder.

    The third case is when you start with some displacement and some velocity. A combination of the two. Then, ugh. You have to figure out ‘A’ and ‘B’ that make both the position and the velocity work out. That’s the simultaneous solutions of equations, and not even hard equations. It’s more work is all. I’m interested in other stuff anyway.
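    All three cases collapse into the same two assignments: ‘A’ is the starting displacement, and ‘B’ is the starting velocity divided by the square root of k over m. A minimal Python sketch (the function name and numbers are mine, just for illustration):

```python
import math

def spring_motion(x0, v0, k, m):
    """Displacement x(t) for a thing of mass m on a spring with
    constant k, given starting displacement x0 and velocity v0."""
    omega = math.sqrt(k / m)
    A = x0            # the tug-out case: A is the initial displacement
    B = v0 / omega    # the whack case: B is initial velocity over omega
    return lambda t: A * math.cos(omega * t) + B * math.sin(omega * t)

# Tug it out one unit and let go: k = 4, m = 1, so omega = 2.
x = spring_motion(x0=1.0, v0=0.0, k=4.0, m=1.0)
print(x(0.0))                 # 1.0, right where we left it
print(round(x(math.pi), 6))   # 1.0 again: t = pi is one full cycle here
```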

    Because, yeah, the spring is going to wobble back and forth. What I’d like to know is how long it takes to get back where it started. How long does a cycle take? Look back at that position function, for example. That’s all we need.

    x(t) = A cos\left(\sqrt{\frac{k}{m}} t\right) + B sin\left(\sqrt{\frac{k}{m}} t\right)

    Sine and cosine functions are periodic. They have a period of 2π. This means if you take the thing inside the parentheses after a sine or a cosine and increase it — or decrease it — by 2π, you’ll get the same value out. What’s the first time that the displacement and the velocity will be the same as their starting values? If they started at t = 0, then, they’re going to be back there at a time ‘T’ which makes true the equation

    \sqrt{\frac{k}{m}} T = 2\pi

    And that’s going to be

    T = 2\pi\sqrt{\frac{m}{k}}
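    In code the formula is one line, and the interesting thing is what it leaves out (a sketch of mine; the function name is made up):

```python
import math

def spring_period(m, k):
    """Period of oscillation for a mass m on an ideal spring of constant k."""
    return 2 * math.pi * math.sqrt(m / k)

# Note the argument list: the displacement isn't even a parameter.
print(round(spring_period(m=1.0, k=1.0), 4))  # 6.2832, that is, 2 pi
print(round(spring_period(m=4.0, k=1.0), 4))  # 12.5664: four times the mass, twice the period
```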

    Maybe the surprising thing about this: the period doesn’t depend at all on how big the displacement is. That’s true for perfect springs, which don’t exist in the real world. You knew that. Imagine taking a Junior Slinky from the dollar store and sticking a block of something on one end. Imagine stretching it out to 500,000 times the distance between the Earth and Jupiter and letting go. Would it act like a spring or would it break? Yeah, we know. It’s sad. Think of the animated-cartoon joy a spring like that would produce.

    But this period not depending on the displacement is true for small enough displacements, in the real world. Or for good enough springs. Or things that work enough like springs. By “true” I mean “close enough to true”. We can give that a precise mathematical definition, which turns out to be what you would mean by “close enough” in everyday English. The difference is it’ll have Greek letters included.

    So to sum up: suppose we have something that acts like a spring. Then we know qualitatively how it behaves. It oscillates back and forth in a sine wave around the equilibrium. Suppose we know what the spring constant ‘k’ is. Suppose we also know ‘m’, which represents the inertia of the thing. If it’s a real thing on a real spring it’s mass. Then we know quantitatively how it moves. It has a period, based on this spring constant and this mass. And we can say how big the oscillations are based on how big the starting displacement and velocity are. That’s everything I care about in a spring. At least until I get into something wild like several springs wired together, which I am not doing now and might never do.

    And, as we’ll see when we get back to orbits, a lot of things work close enough to springs.

     
    • tkflor 8:13 pm on Saturday, 20 May, 2017 Permalink | Reply

      “I don’t want to call this the ideal or natural state. That suggests some moral superiority to one way of existing over another, and how do I know what’s right for your thing? ”
      Why don’t you call it a “ground state”?


      • Joseph Nebus 6:49 am on Friday, 26 May, 2017 Permalink | Reply

        You’re right. This is a ground state.

        For folks just joining in, the “ground state” is what the system looks like when it’s got the least possible energy. At least the least energy consistent with it being a system at all. For a spring problem that’s the one where the thing is at rest, at the center, not displaced at all.

        In a more complicated system you can have an equilibrium that’s stable and that isn’t the ground state. That isn’t the case here, but I wonder if thinking about that didn’t make me avoid calling it a ground state.


  • Joseph Nebus 6:00 pm on Tuesday, 28 February, 2017 Permalink | Reply
    Tags: Julian Dates, mst3k, US Naval Observatory

    How To Work Out The Length Of Time Between Two Dates 


    September 1999 was a heck of a month, you maybe remember. There was all that excitement of the Moon being blasted out of orbit thanks to the nuclear waste pile up there getting tipped over or something. And that was just as we were getting over the first airing of the final new episode of Mystery Science Theater 3000. That episode was number 1003, Merlin’s Shop of Mystical Wonders, which aired a month after the season finale because of one of those broadcast rights tangles that the show always suffered through.

    Time moves on, and strange things happen, and show co-creator and first host Joel Hodgson got together a Kickstarter and a Netflix deal. The show’s Season Eleven is supposed to air starting the 14th of April, this year. The natural question: how long will we go, then, between new episodes of Mystery Science Theater 3000? Or more generally, how do you work out how long it is between two dates?

    The answer is dear Lord under no circumstances try to work this out yourself. I’m sorry to be so firm. But the Gregorian calendar grew out of a bunch of different weird influences. It’s just hard to keep track of all the different 31- and 30-day months between two events. And then February is all sorts of extra complications. It’s especially tricky if the interval spans a century year, like 2000, since the majority of those are not leap years, except that the one century year I’m likely to experience was. And then if your interval happens to cross the time the local region switched from the Julian to the Gregorian calendar —

    So my answer is don’t ever try to work this out yourself. Never. Just refuse the problem if you’re given it. If you’re a consultant charge an extra hundred dollars for even hearing the problem.

    All right, but what if you really absolutely must know for some reason? I only know one good answer. Convert the start and the end dates of your interval into Julian Dates and subtract one from the other. I mean subtract the smaller number from the larger. Julian Dates are one of those extremely minor points of calendar use. They track the number of days elapsed since noon, Universal Time, on the Julian-calendar date we call the 1st of January, 4713 BC. The scheme for numbering years was set up in 1583 by Joseph Justus Scaliger, calendar reformer, who wanted for convenience an index year so far back that every historically known event would have a positive number. In the 19th century the astronomer John Herschel expanded it to date-counting.

    Scaliger picked the year from the convergence of a couple of convenient calendar cycles about how the sun and moon move as well as the 15-year indiction cycle that the Roman Empire used for tax matters (and that left an impression on European nations). His reasons don’t much matter to us. The specific choice means we’re not quite three-fifths of the way through the days in the 2,400,000’s. So it’s not rare to modify the Julian Date by subtracting 2,400,000 from it. The date starts from noon because astronomers used to start their new day at noon, which was more convenient for logging a whole night’s observations. Since astronomers started taking pictures of stuff and looking at them later they’ve switched to the new day starting at midnight like everybody else, but you know what it’s like changing an old system.

    This raises the problem: how do I know how many days passed between whatever day I’m interested in and the Julian Calendar 1st of January, 4713 BC? Yes, there’s a formula. No, don’t try to use it. Let the fine people at the United States Naval Observatory do the work for you. They know what they’re doing and they’ve had this calculator up for a very long time without any appreciable scandal accruing to it. The system asks you for a time of day, because the Julian Date increases as the day goes on. You can just make something up if the time doesn’t matter. I normally leave it on midnight myself.

    So. The last episode of Mystery Science Theater 3000 to debut, on the 12th of September, 1999, did so on Julian Date 2,451,433. (Well, at 9 am Eastern that day, but nobody cares about that fine-grained a detail.) The new season’s to debut on Netflix the 14th of April, 2017, which will be Julian Date 2,457,857. (I have no idea if there’s a set hour or if it’ll just become available at 12:01 am in whatever time zone Netflix Master Command’s servers are in.) That’s a difference of 6,424 days. You’re on your own in arguing about whether that means it was 6,424 or 6,423 days between new episodes.
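    For the incorrigible: a programming language’s date library has already done the Gregorian bookkeeping, which is the software equivalent of letting the Naval Observatory handle it. A Python sketch, my addition:

```python
from datetime import date

# The two debut dates from above.
last_old_episode = date(1999, 9, 12)
new_season = date(2017, 4, 14)

# Subtracting dates does all the 30-day/31-day/leap-year accounting.
print((new_season - last_old_episode).days)  # 6424, matching the Julian Date arithmetic
```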

    If you do take anything away from this, though, please let it be the warning: never try to work out the interval between dates yourself.

     
    • elkement (Elke Stangl) 9:31 am on Friday, 3 March, 2017 Permalink | Reply

      And I figured the routine date and time conversion mess you face as a software developer is a challenge ;-) …


      • Joseph Nebus 4:53 am on Saturday, 11 March, 2017 Permalink | Reply

        Oh, you have no idea. In that one, an ancient database was designed with every column a string, and dates entered literally as, e.g., ’03/10/2017′. That string of text. Which was all right when the date just had to be shown on-screen, but then I said it should be easy to include a date range, unaware of just what was in the database. Also, there were so many mistakes. Or people entering 00/00/0000 when the date wasn’t available.


  • Joseph Nebus 9:26 am on Saturday, 14 March, 2015 Permalink | Reply
    Tags: Buffon's Needle Problem, Comte de Buffon, mst3k, stochastics

    Calculating Pi Terribly 


    I’m not really a fan of Pi Day. I’m not fond of the 3/14 format for writing dates to start with — it feels intolerably ambiguous to me for the first third of the month — and it requires reading the / as a . to make sense, when that just is not how the slash works. To use the / in any of its normal forms, Pi Day should be the 22nd of July, but that’s incompatible with the normal American date-writing conventions and leaves a day that’s nominally a promotion of the idea that “mathematics is cool” in the middle of summer vacation. This particular objection evaporates if you use . as the separator between month and day, but I don’t like that either, since it uses something indistinguishable from a decimal point as something which is not any kind of decimal point.

    Also it encourages people to post a lot of pictures of pies, and make jokes about pies, and that’s really not a good pun. It plays on the coincidence of sounds without having any of the kind of ambiguity or contrast between or insight into concepts that normally make for the strongest puns, and it hasn’t even got the spontaneity of being something that just came up in conversation. We could use better jokes is my point.

    But I don’t want to be relentlessly down about what’s essentially a bit of whimsy. (Although, also, dropping the ’20’ from 2015 so as to make this the Pi Day Of The Century? Tom Servo has a little song about that sort of thing.) So, here’s a neat and spectacularly inefficient way to generate the value of pi, that doesn’t superficially rely on anything to do with circles or diameters, and that’s probability-based. The wonderful randomness of the universe can give us a very specific and definite bit of information.

    (More …)

     
    • abyssbrain 10:28 am on Saturday, 14 March, 2015 Permalink | Reply

      When I first read about this method for calculating pi, I entertained the idea of trying it myself, but I quickly discarded that idea, since who knows how long it would take before I would reach pi :)

      Btw, I’m also very confused with the American way of writing dates since I’m used to either ddmmyyyy format or yyyymmdd format. So, March 14, 2015 for me is 14/3/2015. I’ve also just posted some of my reasons why I don’t celebrate pi day…


      • Joseph Nebus 12:25 am on Monday, 16 March, 2015 Permalink | Reply

        Oh, it would just take forever to find pi using needles and ruled lines. You’d do considerably better if you drew a quarter-circle on a square dartboard, and tossed darts at it, counting the ratio of darts that hit inside the quarter-circle to darts outside. At least you’d have a better night of it.
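        That dartboard scheme takes only a few lines to try in Python (a toy sketch I’m adding here, not anything from the original thread; the darts are random, so the estimate only hovers near pi):

```python
import random

def estimate_pi(darts, seed=0):
    """Throw darts uniformly at the unit square; the fraction landing
    inside the quarter-circle of radius 1 approaches pi/4."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(darts)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)
    return 4.0 * hits / darts

print(estimate_pi(100_000))  # somewhere close to 3.14
```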

        I don’t know why the United States uses the month-day-year format, particularly since it hasn’t got much (any?) use elsewhere in the world. My suspicion is that there probably was a time when both month-day and day-month were common enough in English-speaking nations, and the United States settled on one format while the United Kingdom settled on the other back in the 19th century, when stuff standardized.


        • abyssbrain 1:08 am on Monday, 16 March, 2015 Permalink | Reply

          Well, it seems widespread because US websites like Google use mmddyyyy format by default and most of the top sites are from the US…

          Though I have noticed that they are now slowly changing the date format of many wikipedia articles to ddmmyyyy format.


          • Joseph Nebus 9:05 pm on Tuesday, 17 March, 2015 Permalink | Reply

            I have noticed what looks like a slow shift in American use to day-month-year format, at least when the month is given its proper name rather than a number. The year-month-day order seems irresistible if you’re determined to stick to writing things as digits, for reasons I have to agree are pretty solid.

            Anyway, there does seem to be something logical about sticking to one logical path about whether the thing written first should be the thing most likely to change and the thing written last the least likely, or the other way around.


    • LFFL 6:09 pm on Saturday, 14 March, 2015 Permalink | Reply

      See. I opened this blog after avoiding it for such a long time and my headache started instantly! The agony.


      • Joseph Nebus 12:26 am on Monday, 16 March, 2015 Permalink | Reply

        Aw, dear. If it helps any I should have a fresh comic strips review in the next day or so. That’s nice and friendly.


  • Joseph Nebus 9:46 pm on Friday, 17 January, 2014 Permalink | Reply
    Tags: , , mst3k, , ,   

    What’s The Worst Way To Pack? 


    While reading that biography of Donald Coxeter that brought up that lovely triangle theorem, I ran across some mentions of the sphere-packing problem. That’s the treatment of a problem anyone who’s had a stack of oranges or golf balls has independently discovered: how can you arrange balls, all the same size (oranges are near enough), so as to have the least amount of wasted space between balls? It’s a mathematics problem with a lot of applications, both the obvious ones of arranging orange or golf-ball shipments, and less obvious ones such as sending error-free messages. You can recast the problem of sending a message so it’s understood even despite errors in coding, transmitting, receiving, or decoding, as one of packing equal-size balls around one another.

    A collection of Mystery Science Theater 3000 foam balls which I got as packing material when I ordered some DVDs.

    The “packing density” is the term used to say how much of a volume of space can be filled with balls of equal size using some pattern or other. Patterns called the cubic close packing or the hexagonal close packing are the best that can be done with periodic packings, ones that repeat some base pattern over and over; they fill a touch over 74 percent of the available space with balls. If you don’t want to follow the Mathworld links before, just get a tub of balls, or crate of oranges, or some foam Mystery Science Theater 3000 logo balls as packing materials when you order the new DVD set, and play around with it a while and you’ll likely rediscover them. If you’re willing to give up that repetition you can get up to nearly 78 percent. Finding these efficient packings is known as the Kepler conjecture, and yes, it’s that Kepler, and it did take a couple centuries to show that these were the most efficient packings.
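    That touch-over-74-percent figure for the close packings has an exact form, pi divided by the square root of 18, which is easy to confirm (a note I’m adding, not part of the original post):

```python
import math

# Density of the cubic/hexagonal close packings: pi / sqrt(18),
# also written pi / (3 * sqrt(2)).
density = math.pi / math.sqrt(18)
print(round(density, 4))  # 0.7405, the "touch over 74 percent"
```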

    While thinking about that I wondered: what’s the least efficient way to pack balls? The obvious answer is to start with a container the size of the universe, and then put no balls in it, for a packing fraction of zero percent. This seems to fall outside the spirit of the question, though; it’s at least implicit in wondering the least efficient way to pack balls to suppose that there’s at least one ball that exists.

    (More …)

     
    • Chiaroscuro 6:59 am on Saturday, 18 January, 2014 Permalink | Reply

      Without knowing the mathematical definition, I’d figure a ‘packing of spheres’ involves an arrangement such that:
      #1 There are no spaces/holes in the packing arrangement large enough so that a sphere could fit into an empty space.
      #2 Each sphere touches at least one other sphere.

      The Tetrahedral packing would seem to follow this; I’m.. not sure about the others.

      –Chi


      • Joseph Nebus 5:33 am on Monday, 20 January, 2014 Permalink | Reply

        I admit not knowing what’s actually considered standard in the field. The requirement that each sphere touch at least one other seems hard to dispute; after all, otherwise, you might have isolated strands that don’t interact in any way.

        That there not be any gaps big enough to fit another ball in seems like it might be a fair dispute. I could imagine, for example, the problem of a sparse packing being relevant to the large-scale structure of the cosmos, as it seems to have some strands of matter and incredible voids.

        I wouldn’t be surprised if there’s a term of art for packings without such gaps and packings with, though I don’t know what that might be (although “sparse packings” seems like the sort of thing that might be used).


    • elkement 1:26 pm on Monday, 20 January, 2014 Permalink | Reply

      My perspective on these packings is that of solid-state physics – so I would suppose that real-life packings would need to be stabilized by electromagnetic forces between these “particles”, not so much by gravity.


      • Joseph Nebus 12:39 am on Wednesday, 22 January, 2014 Permalink | Reply

        I think you’re probably right that in this sort of problem, at least for solid-state physics, gravity wouldn’t be a consideration and it’d just be the pairwise interactions of the balls themselves that matters.

        There’s a couple things people might mean by stability, with the one most relevant to the mathematics problem probably being how little displacements of the balls affect whatever force holds one ball to its neighbors. If you were trying to build a model of one of these out of styrofoam and toothpicks to show off in the mathematics library, stability with respect to gravity probably matters more.


    • Gerry 11:20 pm on Wednesday, 13 April, 2016 Permalink | Reply
