Tagged: distributions

  • Joseph Nebus 6:00 pm on Friday, 11 November, 2016 Permalink | Reply
    Tags: distributions

    The End 2016 Mathematics A To Z: Ergodic 


    This essay follows up on distributions, mentioned back on Wednesday. This is only one of the ideas which distributions serve. Do you have a word you’d like to request? I figure to close ‘F’ on Saturday afternoon, and ‘G’ is already taken. But give me a request for a free letter soon and I may be able to work it in.

    Ergodic.

    There comes a time when a physics major, or a mathematics major paying attention to one of the field’s best non-finance customers, first works on a statistical mechanics problem. Instead of keeping track of the positions and momentums of one or two or four particles she’s given the task of tracking millions of particles. It’s listed as a distribution of all the possible values they can have. But she still knows what it really is. And she looks at how to describe the way this distribution changes in time. If she’s the slightest bit like me, or anyone I knew, she freezes up at this. Calculate the development of millions of particles? Impossible! She tries working out what happens to just one, instead, and hopes that gives some useful results.

    And then it does.

    It’s a bit much to call this luck. But it is because the student starts off with some simple problems. Particles of gas in a strong box, typically. They don’t interact chemically. Maybe they bounce off each other, but she’s never asked about that. She’s asked about how they bounce off the walls. She can find the relationship between the volume of the box and the gas pressure on its interior and the temperature of the gas. And it comes out right.

    She goes on to some other problems and it suddenly fails. Eventually she re-reads the descriptions of how to do this sort of problem. And she does them again and again and it doesn’t feel useful. With luck there’s a moment, possibly while showering, that the universe suddenly changes. And the next time the problem works out. She’s working on distributions instead of toy little single-particle problems.

    But the problem remains: why did it ever work, even for that toy little problem?

    It’s because some systems of things are ergodic. It’s a property that some physics (or mathematics) problems have. Not all. It’s a bit hard to describe clearly. Part of what motivated me to take this topic is that I want to see if I can explain it clearly.

    Every part of some system has a set of possible values it might have. A particle of gas can be in any spot inside the box holding it. A person could be in any of the buildings of her city. A pool ball could be travelling in any direction on the pool table. Sometimes that will change. Gas particles move. People go to the store. Pool balls bounce off the edges of the table.

    These values will have some kind of distribution. Look at where the gas particle is now. And a second from now. And a second after that. And so on, to the limits of human knowledge. Or to when the box breaks open. Maybe the particle will be more often in some areas than in others. Maybe it won’t. Doesn’t matter. It has some distribution. Over time we can say how often we expect to find the gas particle in each of its possible places.

    The same with whatever our system is. People in buildings. Balls on pool tables. Whatever.

    Now instead of looking at one particle (person, ball, whatever) we have a lot of them. Millions of particles in the box. Tens of thousands of people in the city. A pool table that somehow supports ten thousand balls. Imagine they’re all settled to wherever they happen to be.

    So where are they? The gas particle one is easy to imagine. At least for a mathematics major. If you’re stuck on it I’m sorry. I didn’t know. I’ve thought about boxes full of gas particles for decades now and it’s hard to remember that isn’t normal. Let me know if you’re stuck, and where you are. I’d like to know where the conceptual traps are.

    But back to the gas particles in a box. Some fraction of them are in each possible place in the box. There’s a distribution here of how likely you are to find a particle in each spot.

    How does that distribution, the one you get from lots of particles at once, compare to the first, the one you got from one particle given plenty of time? If they agree the system is ergodic. And that’s why my hypothetical physics major got the right answers from the wrong work. (If you are about to write me to complain I’m leaving out important qualifiers let me say I know. Please pretend those qualifiers are in place. If you don’t see what someone might complain about thank you, but it wouldn’t hurt to think of something I might be leaving out here. Try taking a shower.)

    The person in a building is almost certainly not an ergodic system. There are buildings any one person will never ever go into, however possible it might be. But nearly all buildings have some people who will go into them. The one-person-with-time distribution won’t be the same as the many-people-at-once distribution. Maybe there’s a way to qualify things so that it becomes ergodic. I doubt it.

    The pool table, now, that’s trickier to say. For a real pool table no, of course not. An actual ball on an actual table rolls to a stop pretty soon, either from the table felt’s friction or because it drops into a pocket. Tens of thousands of balls would form an immobile heap on the table that would be pretty funny to see, now that I think of it. Well, maybe those are the same. But they’re a pretty boring same.

    Anyway when we talk about “pool tables” in this context we don’t mean anything so sordid as something a person could play pool on. We mean something where the table surface hasn’t any friction. That makes the physics easier to model. It also makes the game unplayable, which leaves the mathematical physicist strangely unmoved. In this context anyway. We also mean a pool table that hasn’t got any pockets. This makes the game even more unplayable, but the physics even easier. (It makes it, really, like a gas particle in a box. Only without that difficult third dimension to deal with.)

    And that makes it clear. The one ball on a frictionless, pocketless table bouncing around forever maybe we can imagine. A huge number of balls on that frictionless, pocketless table? Possibly trouble. As long as we’re doing imaginary impossible unplayable pool we could pretend the balls don’t collide with each other. Then the distributions of what ways the balls are moving could be equal. If they do bounce off each other, or if they get so numerous they can’t squeeze past one another, well, that’s different.

    An ergodic system lets you do this neat, useful trick. You can look at a single example for a long time. Or you can look at a lot of examples at one time. And they’ll agree in their typical behavior. If one is easier to study than the other, good! Use the one that you can work with. Mathematicians like to do this sort of swapping between equivalent problems a lot.
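
    One of the simplest systems that provably has this property is rotation of a circle by an irrational angle. Here’s a minimal sketch in Python (my choice of language and example, not anything from the essay) comparing the time average of a function along one orbit with its average over the whole space:

```python
import math

def time_average(f, x0, alpha, steps):
    """Average f along the orbit of the map x -> (x + alpha) mod 1."""
    x, total = x0, 0.0
    for _ in range(steps):
        total += f(x)
        x = (x + alpha) % 1.0
    return total / steps

# Rotation by an irrational angle is ergodic, so the time average of a
# well-behaved function matches its average over the interval [0, 1).
alpha = math.sqrt(2) % 1.0                              # irrational angle
orbit_avg = time_average(lambda x: x, 0.2, alpha, 100_000)
space_avg = 0.5                                         # integral of x over [0, 1)
print(abs(orbit_avg - space_avg))                       # small
```

    The one orbit, followed long enough, stands in for the whole circle: that is the swap between the single-example-over-time view and the many-examples-at-once view.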

    The problem is it’s hard to find ergodic systems. We may have a lot of things that look ergodic, that feel like they should be ergodic. But proved ergodic, with a logic that we can’t shake? That’s harder to do. Often in practice we will include a note up top that we are assuming the system to be ergodic. With that “ergodic hypothesis” in mind we carry on with our work. It gives us a handle on a lot of problems that otherwise would be beyond us.

     
  • Joseph Nebus 6:00 pm on Wednesday, 9 November, 2016 Permalink | Reply
    Tags: distributions, Josiah Willard Gibbs

    The End 2016 Mathematics A To Z: Distribution (statistics) 


    As I’ve done before I’m using one of my essays to set up for another. It makes the later essay easier. What I want to talk about is worth some paragraphs on its own.

    Distribution (statistics)

    The 19th Century saw the discovery of some unsettling truths about … well, everything, really. If there is an intellectual theme of the 19th Century it’s that everything has an unsettling side. In the 20th Century craziness broke loose. The 19th Century, though, saw great reasons to doubt that we knew what we knew.

    But one of the unsettling truths grew out of mathematical physics. We start out studying physics the way Galileo or Newton might have, with falling balls. Ones that don’t suffer from air resistance. Then we move up to more complicated problems, like balls on a spring. Or two balls bouncing off each other. Maybe one ball, called a “planet”, orbiting another, called a “sun”. Maybe a ball on a lever swinging back and forth. We try a couple simple problems with three balls and find out that’s just too hard. We have to track so much information about the balls, about their positions and momentums, that we can’t solve any problems anymore. Oh, we can do the simplest ones, but we’re helpless against the interesting ones.

    And then we discovered something. By “we” I mean people like James Clerk Maxwell and Josiah Willard Gibbs. And that is that we can know important stuff about how millions and billions and even vaster numbers of things move around. Maxwell could work out how the enormously many chunks of rock and ice that make up Saturn’s rings move. Gibbs could work out how the trillions of trillions of trillions of trillions of particles of gas in a room move. We can’t work out how four particles move. How is it we can work out how a godzillion particles move?

    We do it by letting go. We stop looking for that precision and exactitude and knowledge down to infinitely many decimal points. Even though we think that’s what mathematicians and physicists should have. What we do instead is consider the things we would like to know. Where something is. What its momentum is. What side of a coin is showing after a toss. What card was taken off the top of the deck. What tile was drawn out of the Scrabble bag.

    There are possible results for each of these things we would like to know. Perhaps some of them are quite likely. Perhaps some of them are unlikely. We track how likely each of these outcomes is. This is called the distribution of the values. This can be simple. The distribution for a fairly tossed coin is “heads, 1/2; tails, 1/2”. The distribution for a fairly tossed six-sided die is “1/6 chance of 1; 1/6 chance of 2; 1/6 chance of 3” and so on. It can be more complicated. The distribution for a fairly tossed pair of six-sided dice starts out “1/36 chance of 2; 2/36 chance of 3; 3/36 chance of 4” and so on. If we’re measuring something that doesn’t come in nice discrete chunks we have to talk about ranges: the chance that a 30-year-old male weighs between 180 and 185 pounds, or between 185 and 190 pounds. The chance that a particle in the rings of Saturn is moving between 20 and 21 kilometers per second, or between 21 and 22 kilometers per second, and so on.
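
    The dice distribution is easy to check by brute force. A quick Python sketch (illustrative only; the names are mine) counts every one of the equally likely rolls of a fair pair:

```python
from collections import Counter
from fractions import Fraction

# Count each possible sum over all 36 equally likely rolls of two fair dice.
counts = Counter(a + b for a in range(1, 7) for b in range(1, 7))
dist = {total: Fraction(n, 36) for total, n in counts.items()}

print(dist[2], dist[3], dist[4])   # 1/36 1/18 1/12 (that is, 1/36, 2/36, 3/36)
print(sum(dist.values()))          # 1 -- the chances of all outcomes add to 1
```
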

    We may be unable to describe how a system evolves exactly. But often we’re able to describe how the distribution of its possible values evolves. And the laws by which probability works conspire in our favor here. We can get quite precise predictions for how a whole bunch of things behave even without ever knowing what any one thing is doing.

    That’s unsettling to start with. It’s made worse by one of the 19th Century’s late discoveries, that of chaos. That a system can be perfectly deterministic. That you might know what every part of it is doing as precisely as you care to measure. And you’re still unable to predict its long-term behavior. That’s unshakeable too, although statistical techniques will give you an idea of how likely different behaviors are. You can learn the distribution of what is likely, what is unlikely, and how often the outright impossible will happen.

    Distributions follow rules. Of course they do. They’re basically the rules you’d imagine from looking at and thinking about something with a range of values. Something like a chart of how many students got what grades in a class, or how tall the people in a group are, or so on. Each possible outcome turns up some fraction of the time. That fraction’s never less than zero nor greater than 1. Add up all the fractions representing all the times every possible outcome happens and the sum is exactly 1. Something happens, even if we never know just what. But we know how often each outcome will.

    There is something amazing to consider here. We can know and track everything there is to know about a physical problem. But we will be unable to do anything with it, except for the most basic and simple problems. We can choose to relax, to accept that the world is unknown and unknowable in detail. And this makes imaginable all sorts of problems that should be beyond our power. Once we’ve given up on this precision we get precise, exact information about what could happen. We can choose to see it as a moral about the benefits and costs and risks of how tightly we control a situation. It’s a surprising lesson to learn from one’s training in mathematics.

     
  • Joseph Nebus 2:49 pm on Wednesday, 1 July, 2015 Permalink | Reply
    Tags: distributions, median, quintiles, word counts

    A Summer 2015 Mathematics A To Z: quintile 


    Quintile.

    Why is there statistics?

    There are many reasons statistics got organized as a field of study, mostly in the late 19th and early 20th century. Chiefly they reflect wanting to be able to say something about big collections of data. People can only keep track of so much information at once. Even if we could keep track of more information, we’re usually interested in relationships between pieces of data. When there’s enough data there are so many possible relationships that we can’t see what’s interesting.

    One of the things statistics gives us is a way of representing lots of data with fewer numbers. We trust there’ll be few enough numbers we can understand them all simultaneously, and so understand something about the whole data.

    Quintiles are one of the tools we have. They’re a lesser tool, I admit, but that makes them sound more exotic. They’re descriptions of how the values of a set of data are distributed. Distributions are interesting. They tell us what kinds of values are likely and which are rare. They tell us also how variable the data is, or how reliably we are measuring data. These are things we often want to know: what is normal for the thing we’re measuring, and what’s a normal range?

    We get quintiles from imagining the data set placed in ascending order. There’s some value that one-fifth of the data points are smaller than, and four-fifths are greater than. That’s your first quintile. Suppose we had the values 269, 444, 525, 745, and 1284 as our data set. The first quintile would be the arithmetic mean of 269 and 444, that is, 356.5.

    The second quintile is some value that two-fifths of your data points are smaller than, and that three-fifths are greater than. With the data set we started with, that would be the mean of 444 and 525, or 484.5.

    The third quintile is a value that three-fifths of the data set is less than, and two-fifths greater than; in this case, that’s 635.

    And the fourth quintile is a value that four-fifths of the data set is less than, and one-fifth greater than. That’s the mean of 745 and 1284, or 1014.5.

    From looking at the quintiles we can say … well, not much, because this is a silly made-up problem that demonstrates how quintiles are calculated rather than why we’d want to do anything with them. At least the numbers come from real data. They’re the word counts of my first five A-to-Z definitions. But the existence of the quintiles at 356.5, 484.5, 635, and 1014.5, along with the minimum and maximum data points at 269 and 1284, tells us something. Mostly that the numbers are bunched up in the three and four hundreds, but there could be some weird high numbers. If we had a bigger data set the results would be less obvious.
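
    For what it’s worth, the midpoint rule from the worked example is short to write out. This Python sketch (my own helper, which assumes the data count is a multiple of five so every cut falls exactly between two values) reproduces the quintiles of the word-count data:

```python
def quintiles(data):
    """Quintile cut points by the midpoint rule: sort the data, then take
    the mean of the two values straddling each one-fifth boundary.
    Assumes len(data) is a multiple of 5, as in the worked example."""
    ordered = sorted(data)
    step = len(ordered) // 5
    return [(ordered[k * step - 1] + ordered[k * step]) / 2
            for k in (1, 2, 3, 4)]

word_counts = [269, 444, 525, 745, 1284]
print(quintiles(word_counts))   # [356.5, 484.5, 635.0, 1014.5]
```
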

    If the calculating of quintiles sounds much like the way we work out the median, that’s because it is. The median is the value that half the data is less than, and half the data is greater than. There are other ways of breaking down distributions. The first quartile is the value one-quarter of the data is less than. The second quartile is a value two-quarters of the data is less than (so, yes, that’s the median all over again). The third quartile is a value three-quarters of the data is less than.

    Percentiles are another variation on this. The (say) 26th percentile is a value that 26 percent — 26 hundredths — of the data is less than. The 72nd percentile is a value greater than 72 percent of the data.
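
    There are several conventions for pinning down exactly which value that is; one simple one is the nearest rank. A small Python sketch (the function and the choice of convention are mine, not fixed by anything above):

```python
import math

def percentile(data, p):
    """Nearest-rank percentile: the smallest value with at least p percent
    of the data at or below it. One convention among several in use."""
    ordered = sorted(data)
    k = max(1, math.ceil(p / 100 * len(ordered)))
    return ordered[k - 1]

word_counts = [269, 444, 525, 745, 1284]
print(percentile(word_counts, 50))   # 525 -- the median again
```
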

    Are quintiles useful? Well, that’s a loaded question. They are used less than quartiles are. And I’m not sure knowing them is better than looking at a spreadsheet’s plot of the data. A plot of the data with the quintiles, or quartiles if you prefer, drawn in is better than either separately. But these are among the tools we have to tell what data values are likely, and how tightly bunched-up they are.

     
  • Joseph Nebus 8:13 pm on Saturday, 9 May, 2015 Permalink | Reply
    Tags: distributions, growth, Peanuts

    Reading the Comics, May 9, 2015: Trapezoid Edition 


    And now I get caught up again, if briefly, to the mathematically-themed comic strips I can find. I’ve dubbed this one the trapezoid edition because one happens to mention the post that will outlive me.

    Todd Clark’s Lola (May 4) is a straightforward joke. Monty’s given his chance of passing mathematics and doesn’t understand the prospect is grim.

    'What number am I thinking of?' '9,618,210.' 'Right!' 'He always thinks of the same number.'

    Joe Martin’s Willy and Ethel for the 4th of May, 2015. The link will likely expire in early June.

    Joe Martin’s Willy and Ethel (May 4) shows an astounding feat of mind-reading, or of luck. How amazing it is to draw a number at random from a range depends on many things. It’s less impressive to pick the right number if there are only three possible answers than it is to pick the right number out of ten million possibilities. When we ask someone to pick a number we usually mean a range of the counting numbers. My experience suggests it’s “one to ten” unless some other range is specified. But the other thing that affects how amazing the feat is, is the distribution. There might be ten million possible responses, but if only a few of them are likely then the feat is much less impressive.

    The distribution of a random number is the interesting thing about it. The number has some value, yes, and we may not know what it is, but we know how likely it is to be any of the possible values. And good mathematics can be done knowing the distribution of a value of something. The whole field of statistical mechanics is an example of that. James Clerk Maxwell, famous for the equations which describe electromagnetism, used such random variables to explain how the rings of Saturn could exist. It isn’t easy to start solving problems with distributions instead of particular values — I’m not sure I’ve seen a good introduction, and I’d be glad to pass one on if someone can suggest it — but the power it offers is amazing.

    (More …)

     
    • sheldonk2014 10:32 pm on Saturday, 9 May, 2015 Permalink | Reply

      I love the Stan Drake strip
      As always Sheldon


      • Joseph Nebus 3:36 am on Monday, 11 May, 2015 Permalink | Reply

        Glad you like it. I’ve been intrigued by The Heart of Juliet Jones as a great example of the romance/soap-opera strip and for being occasionally very funny in how it hews to the genre conventions.


    • ivasallay 1:11 am on Sunday, 10 May, 2015 Permalink | Reply

      Thanks for introducing me to that classic strip Skippy.


      • Joseph Nebus 3:37 am on Monday, 11 May, 2015 Permalink | Reply

        Happy to. It’s one of the underrated gems of 20th century American comics.


    • elkement 7:51 am on Tuesday, 12 May, 2015 Permalink | Reply

      Yes, the E=mc2 joke hurts a bit – thinking about units ;-)


    • chattykerry 9:31 pm on Tuesday, 12 May, 2015 Permalink | Reply

      I feel like Penny in the Big Bang Theory when reading your site… Clearly, only the left side of my brain works. :) Thank you for enjoying my guest blog on Jumbled Writer.


      • Joseph Nebus 4:07 pm on Friday, 15 May, 2015 Permalink | Reply

        Aw, goodness, don’t be hard on yourself. Everyone can do mathematics and ought to feel like they’re welcome to.

        I promise: if something I write seems unclear, tell me. I’ll do my best to be more understandable.

