
  • Joseph Nebus 3:00 pm on Friday, 27 May, 2016 Permalink | Reply

    Why Stuff Can Orbit, Part 2: Why Stuff Can’t Orbit 



    As I threatened last week, I want to talk some about central forces. They’re forces in which one particle attracts another with a force that depends only on how far apart the two are. Last week’s essay described some of the assumptions behind the model.

    Mostly, we can study two particles interacting as if it were one particle hovering around the origin. The origin is some central reference point. If we’re looking for a circular orbit then we only have to worry about one variable. This would be ‘r’, the radius of the orbit: how far the planet is from the sun it orbits.

    Now, central forces can follow any rule you like. Not in reality, of course. In reality there’s two central forces you ever see. One is gravity (electric attraction or repulsion follows the same rule) and the other is springs. But we can imagine there being others. At the end of this string of essays I hope to show why there’s special things about these gravity and spring-type forces. And by imagining there’s others we can learn something about why we only actually see these.

    So now I’m going to stop talking about forces. I’ll talk about potential energy instead. There’s several reasons for this, but they all come back to this one: energy is easier to deal with. Energy is a scalar, a single number. A force is a vector, which for this kind of physics-based problem is an array of numbers. We have less to deal with if we stick to energy. If we need forces later on we can get them from the energy. We’ll need calculus to do that, but it won’t be the hard parts of calculus.

    The potential energy will be some function. As a central force it’ll depend only on the distance, r, that a particle is from the origin. It’s convenient to have a name for this. So I will use a common name: V(r). V is a common symbol to use for potential energy. U is another. The (r) emphasizes that this is some function which depends on r. V(r) doesn’t commit us to any particular function, not at this point.

    You might ask: why is the potential energy represented with V, or with U? And I don’t really know. Sometimes we’ll use PE to mean potential energy, which is as clear a shorthand name as we could hope for. But a name that’s two letters like that tends to be viewed with suspicion when we have to do calculus work on it. The label looks like the product of P and E, and derivatives of products get tricky. So it’s a less popular label if you know you’re going to take the derivative of the potential energy anytime soon. E_P can also get used, and the subscript means it doesn’t look like the product of any two things. Still, at least in my experience, U and V are most often used.

    As I say, I don’t know just why it should be them. It might just be that the letters were available when someone wrote a really good physics textbook. If we want to assume there must be some reason behind this letter choice I have seen a plausible guess. Potential energy is used to produce work. Work is W. So potential energy should be a letter close to W. That suggests U and V, both letters that are part of the letter W. (Listen to the name of ‘W’, and remember that until fairly late in the game U and V weren’t clearly distinguished as letters.) But I do not know of manuscript evidence suggesting that’s what anyone ever thought. It is at best a maybe useful mnemonic.

    Here’s an advantage that using potential energy will give us: we can postpone using calculus a little. Not for quantitative results. Not for ones that describe exactly where something should orbit. But it’s good for qualitative results. We can answer questions like “is there a circular orbit” and “are there maybe several plausible orbits” just by looking at a picture.

    That picture is a plot of the values of V(r) against r. And that can be anything. I mean it. Take your preferred drawing medium and draw any wiggly curve you like. It can’t loop back or cross itself or something like that, but it can be as smooth or as squiggly as you like. That’s your central-force potential energy V(r).

    Are there any circular orbits for this potential? Calculus gives us the answer, but we don’t need that. For a potential like our V(r), that depends on just one variable, we can just look. (We could also do this for a potential that depends on two variables.) Take your V(r). Imagine it’s the sides of a perfectly smooth bowl or track or something. Now imagine dropping a marble or a ball bearing or something nice and smooth on it. Does the marble come to a rest anywhere? That’s your equilibrium. That’s where a circular orbit can happen.
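    The marble test is easy to automate, too. Here’s a minimal sketch of my own (the wiggly potential below is made up, standing in for the hand-drawn V(r)): sample V(r) on a grid and flag the points that sit lower than both their neighbors, which is just where a marble would come to rest.

    ```python
    import math

    def local_minima(V):
        """Indices where V sits below both neighbors: the marble-at-rest test."""
        return [i for i in range(1, len(V) - 1)
                if V[i] < V[i - 1] and V[i] < V[i + 1]]

    # A made-up wiggly potential standing in for the hand-drawn V(r).
    r = [0.1 + 0.01 * k for k in range(1000)]   # radii from 0.1 up past 10
    V = [math.sin(3 * ri) + 0.1 * ri for ri in r]

    for i in local_minima(V):
        print(f"possible circular orbit near r = {r[i]:.2f}")
    ```

    Every radius that prints is a trough of the bowl, and this particular squiggle has several of them.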

    A generic wiggly shape with a bunch of peaks and troughs.

    Figure 1. A generic yet complicated V(r). Spoiler: I didn’t draw this myself because I figured using Octave was easier than using ArtRage on my iPad.

    We’re using some real-world intuition to skip doing analysis. That’s all right in this case. Newtonian mechanics say that a particle’s momentum changes in the direction of a force felt. If a particle doesn’t change its mass, then that means it accelerates where the force, uh, forces it. And this sort of imaginary bowl or track matches up the potential energy we want to study with a constrained gravitational potential energy.

    My generic V(r) was a ridiculous function. This sort of thing doesn’t happen in the real world. But it might have. Wiggly functions like that were explored in the 19th century by physicists trying to explain chemistry. They hoped complicated potentials would explain why gases expanded when they warmed and contracted when they cooled. The project failed. Atoms follow quantum-mechanics laws that only vaguely match Newtonian mechanics like this. But just because functions like these don’t happen doesn’t mean we can’t learn something from them.

    We can’t study every possible V(r). Not at once. Not without more advanced mathematics than I want to use right now. What I’d like to do instead is look at one family of V(r) functions. There will be infinitely many different functions here, but they’ll all resemble each other in important ways. If you’ll allow me to introduce two new numbers we can describe them all with a single equation. The new numbers I’ll name C and n. They’re both constants, at least for this problem. They’re some numbers and maybe at some point I’ll care which ones they are, but it doesn’t matter. If you want to pretend that C is another way to write “eight”, go ahead. n … well, you can pretend that’s just another way to write some promising number like “two” for now. I’ll say when I want to be more specific about it.

    The potential energy I want to look at has a form we call a power law, because it’s all about raising a variable to a power. And we only have the one variable, r. So the potential energy looks like this:

    V(r) = C r^n

    There are some values of n that it will turn out are meaningful. If n is equal to 2, then this is the potential energy for two particles connected by a spring. You might complain there are very few things in the world connected to other things by springs. True enough, but a lot of things act as if they were springs. This includes most anything that’s near but pushed away from a stable equilibrium. It’s a potential worth studying.

    If n is equal to -1, then this is the potential energy for two particles attracting each other by gravity or by electric charges. And here there’s an important little point. If the force is attractive, like gravity or like two particles having opposite electric charges, then we need C to be a negative number. If the force is repulsive, like two particles having the same electric charge, then we need C to be a positive number.

    Although n equalling two, and n equalling negative one, are special cases they aren’t the only ones we can imagine. n may be any number, positive or negative. It could be zero, too, but in that case the potential is a flat line and there’s nothing happening there. That’s known as a “free particle”. It’s just something that moves around with no impetus to speed up or slow down or change direction or anything.

    So let me sketch the potentials for positive n, first for a positive C and second for a negative C. Don’t worry about the numbers on either the x- or the y-axes here; they don’t matter. The shape is all we care about right now.

    The curve starts at zero and rises ever upwards as the radius r increases.

    Figure 2. V(r) = C r^n for a positive C and a positive n.


    The curve starts at zero and drops ever downwards as the radius r increases.

    Figure 3. V(r) = C r^n for a negative C and a positive n.

    Now let me sketch the potentials for a negative n, first for a positive C and second for a negative C.

    The curve starts way high up and keeps dropping, but levelling out, as the radius r increases.

    Figure 4. V(r) = C r^n for a positive C and a negative n.


    The curve starts way down low and rises, but levelling out, as the radius r increases.

    Figure 5. V(r) = C r^n for a negative C and a negative n.

    And now we can look for equilibriums, for circular orbits. If we have a positive n and a positive C, then — well, do the marble-in-a-bowl test. Start from anywhere; the marble rolls down to the origin where it smashes and stops. The only circular orbit is at a radius r of zero.

    With a positive n and a negative C, start from anywhere except a radius r of exactly zero and the marble rolls off to the right, without ever stopping. The only circular orbit is at a radius r of zero.

    With a negative n and a positive C, the marble slides down a hill that gets more shallow but that never levels out. It rolls off getting ever farther from the origin. There’s no circular orbits.

    With a negative n and a negative C, start from anywhere and the marble rolls off to the left. The marble will plummet down that ever-steeper hill. The only circular orbit is at a radius r of zero.

    So for all these cases, with a potential V(r) = C r^n, the only possible “orbits” have both particles zero distance apart. Otherwise the orbiting particle smashes right down into the center or races away never to be seen again. Clearly something has gone wrong with this little project.
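    You can check that conclusion numerically, too. A sketch of my own, resting on one assumption worth labelling: an equilibrium away from the origin would need the slope dV/dr = C n r^(n-1) to vanish at some r greater than zero. Evaluate that slope at many radii, for each sign combination, and it never does.

    ```python
    def slope(r, C, n):
        """dV/dr for the power-law potential V(r) = C * r**n."""
        return C * n * r ** (n - 1)

    # The four sign combinations from Figures 2 through 5. C = 8 and the
    # exponents 2 and -1 are just sample values, as in the text.
    for C in (8.0, -8.0):
        for n in (2.0, -1.0):
            radii = [0.01 * k for k in range(1, 1001)]   # r from 0.01 to 10
            assert all(slope(r, C, n) != 0 for r in radii)
    print("the slope never vanishes at any r > 0")
    ```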

    If you’ve spotted what’s gone wrong please don’t say what it is right away. I’d like people to ponder it a little before coming back to this next week. That will come, I expect, shortly after the first Theorem Thursday post. If you have any requests for that project, please get them in, the sooner the better.

     
  • Joseph Nebus 3:00 pm on Wednesday, 25 May, 2016 Permalink | Reply

    Counting Things 


    I’ve been working on my little thread of posts about sports mathematics. But I’ve also had a rather busy week and I just didn’t have time to finish the next bit of pondering I had regarding baseball scores. Among other things I had the local pinball league’s post-season Split-Flipper Tournament to play in last night. I played lousy, too.

    So I hope I may bring your attention to some interesting posts from Baking And Math. Yenergy started, last week, with a post about the Gauss Circle Problem. Carl Friedrich Gauss you may know as the mathematical genius who proved the Fundamental Theorem of Whatever Subfield Of Mathematics You’re Talking About. Circles are those same old things. The problem is quite old, and easy to understand, and not answered yet. Start with a grid of regularly spaced dots. Draw a circle centered on one of the dots. How many dots are inside the circle?

    Obviously you can count. What we would like is a formula, though: if this is the radius then that function of the radius is the number of points. We don’t have that, remarkably. Yenergy describes some of that, and some ways to estimate the number of points. This is for the circle and for some other shapes.
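    The brute-force count is easy to write down, which makes the lack of a clean formula all the more striking. A quick sketch of my own: count the grid dots inside a circle of a given radius and compare with the obvious estimate, the circle’s area of π times the radius squared.

    ```python
    import math

    def lattice_points_inside(radius):
        """Grid dots (x, y) with integer coordinates and x^2 + y^2 <= radius^2."""
        R = int(radius)
        return sum(1 for x in range(-R, R + 1)
                     for y in range(-R, R + 1)
                     if x * x + y * y <= radius * radius)

    for radius in (5, 10, 50):
        count = lattice_points_inside(radius)
        print(radius, count, round(math.pi * radius ** 2, 1))
    ```

    The counts hug π r², and the Gauss Circle Problem asks exactly how good that hug is.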

    Yesterday, Yenergy continued the discussion and got into partitions. Partitions sound boring; they’re about identifying ways to split something up into components. Yet they turn up everywhere. I’m most used to them in statistical mechanics, the study of physics problems where there’s too many things moving to keep track of them all. But it isn’t surprising they turn up in this sort of point-counting problem.

    As a bonus Yenergy links to an article examining a famous story about Gauss. This is specifically the famous story about him, as a child, doing a quite long arithmetic problem at a glance. It’s a story that’s passed into legend and I had not known how much of it was legend.

     
  • Joseph Nebus 3:00 pm on Monday, 23 May, 2016 Permalink | Reply
    Tags: pyramids   

    Reading the Comics, May 17, 2016: Again, No Pictures Edition 


    Last week’s Reading The Comics was a bunch of Gocomics.com strips. And I don’t feel the need to post the images for those, since they’re reasonably stable links. Today’s is also a bunch of Gocomics.com strips. I know how every how-to-bring-in-readers post ever says you should include images. Maybe I will commission someone to do some icons. It couldn’t hurt.

    Someone looking close at the title, with responsible eye protection, might notice it’s dated the 17th, a day this is not. There haven’t been many mathematically-themed comic strips since the 17th is all. And I’m thinking to try out, at least for a while, making the day on which a Reading the Comics post is issued regular. Maybe Monday. This might mean there are some long and some short posts, but being a bit more scheduled might help my writing.

    Mark Anderson’s Andertoons for the 14th is the charting joke for this essay. Also the Mark Anderson joke for this essay. I was all ready to start explaining ways that the entropy of something can decrease. The easiest way is by expending energy, which we can see as just increasing entropy somewhere else in the universe. The one requiring the most patience is simply waiting: entropy almost always increases, or at least doesn’t decrease. But “almost always” isn’t the same as “always”. But I have to pass. I suspect Anderson drew the chart going down because of the sense of entropy being a winding-down of useful stuff. Or because of down having connotations of failure, and the increase of entropy suggesting the failing of the universe. And we can also read this as a further joke: things are falling apart so badly that even entropy isn’t working like it ought. Anderson might not have meant for a joke that sophisticated, but if he wants to say he did I won’t argue it.

    Scott Adams’s Dilbert Classics for the 14th reprinted the comic of the 20th of March, 1993. I admit I do this sort of compulsive “change-simplifying” when paying, myself. It’s easy to do if you have committed to memory pairs of numbers separated by five: 0 and 5, 1 and 6, 2 and 7, and so on. So if I get a bill for (say) $4.18, I would look for whether I have three cents in change. If I have, have I got 23 cents? That would give me back a nickel. 43 cents would give me back a quarter in change. And a quarter is great because I can use that for pinball.
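    A sketch of why the trick pays off, using the strip’s $4.18 charge (the rest of the numbers are my own illustration): a greedy count of US coins shows the odd-looking payment produces less change to carry.

    ```python
    def coin_count(cents):
        """Fewest US coins (quarters, dimes, nickels, pennies) for an amount."""
        count = 0
        for coin in (25, 10, 5, 1):
            count += cents // coin
            cents %= coin
        return count

    charge = 418                      # the strip's $4.18, in cents
    print(coin_count(500 - charge))   # pay $5.00 even: 82 cents of change
    print(coin_count(503 - charge))   # pay $5.03: 85 cents of change
    ```

    Paying the extra three cents turns six coins of change into four.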

    Sometimes the person at the cash register doesn’t want a ridiculous bunch of change. I don’t blame them. It’s easy to suppose that someone who’s given you $5.03 for a $4.18 charge misunderstood what the bill was. Some folks will take this as a chance to complain mightily about how kids don’t learn even the basics of mathematics anymore and the world is doomed because the young will follow their job training and let machines that are vastly better at arithmetic than they are do arithmetic. This is probably what Adams was thinking, since, well, look at the clerk’s thought balloon in the final panel.

    But consider this: why would Dilbert have handed over $7.14? Or, specifically, how could he give $7.14 to the clerk but not have been able to give $2.14, which would make things easier on everybody? There’s no combination of bills — in United States or, so far as I’m aware, any major world currency — in which you can give seven dollars but not two dollars. He had to be handing over five dollars he was getting right back. The clerk would be right to suspect this. It looks like the start of a change scam, begun by giving a confusing amount of money.

    Had Adams written it so that the charge was $6.89, and Dilbert “helpfully” gave $12.14, then Dilbert wouldn’t be needlessly confusing things.

    Dave Whamond’s Reality Check for the 15th is that pirate-based find-x joke that feels like it should be going around Facebook, even though I don’t think it has been. I can’t say the combination of jokes quite makes logical sense, but I’m amused. It might be from the Reality Check squirrel in the corner.

    Nate Fakes’s Break of Day for the 16th is the anthropomorphized shapes joke for this essay. It’s not the only shapes joke, though.

    Doug Bratton’s Pop Culture Shock Therapy for the 16th is the Einstein joke for this essay.

    Rick Detorie’s One Big Happy rerun for the 17th is another shapes joke. Ruthie has strong ideas about what distinguishes a pyramid from a triangle. In this context I can’t say she’s wrong to assert what a pyramid is.

     
  • Joseph Nebus 3:00 pm on Friday, 20 May, 2016 Permalink | Reply
    Tags: polar coordinates   

    Why Stuff Can Orbit, Part 1: Laying Some Groundwork 


    My recent talking about central forces got me going. There’s interesting stuff about what central forces allow things to orbit one another, and what forces allow for closed orbits. And I feel like trying out a bit of real mathematics, the kind that physics majors do as undergraduates, around here. I should get something for the student loans I’m still paying off and I’ll accept “showing off on my meager little blog here” as something.

    Central forces are, uh, forces. Pairs of particles attract each other. The strength of the attraction depends on how far apart they are. The direction of the attraction is exactly towards the other in the pair. So it works like gravity or electric attraction. It might follow a different rule, although I know I’m going to casually refer to things as “gravity” or “gravitational” because that’s just too familiar a reference. I’m formally talking about a problem in classical mechanics, but the ideas and approaches come from orbital mechanics. The language of orbital mechanics comes along with it.

    And it is too possible that the force would point some other way. Electric charges in a magnetic field feel a force perpendicular to the magnet. And we can represent vortices, things that swirl around the way cyclones do, as particles pushing each other in perpendicular directions. We’re not going to deal with those.

    The easiest kind of orbit to find is a circular one, made by a single pair of particles. I so want to describe that, but if I do, I’m just going to make things more confusing. It’s an orbit that’s a circle. And we’re sticking to a single pair of particles because it turns out it’s easy to describe the central-force movement of two particles. And it’s kind of impossible to describe the central-force movement of three particles. So, let’s stick to two.

    When we start thinking about what we need to describe the system it’s easy to despair. We need the x, y, and z coordinates for two particles. Plus there’s the mass of both particles. Plus there’s some gravitational constant, however strong the force itself is. That’s at least nine things to keep track of.

    We don’t need all that. Physics helps us. Ever hear of the Conservation of Angular Momentum? It’s that thing that makes an ice skater twirling around speed up by pulling in his arms and slow down by reaching them out again. In an argument I’m not dealing with here, the Conservation of Angular Momentum tells us the two particles are going to keep to a single plane. They can move together or apart, but they’ll trace out paths in a two-dimensional slice of space. We can, without loss of generality, suppose it to be the horizontal plane. That is, that the z-coordinate for both particles starts as zero and stays there. So we’re down to seven things to keep track of.

    We can simplify some other stuff. For example, suppose we have one really big mass and one really small one: a sun and a planet, or a planet and a satellite. The sun isn’t going to move very much; the planet hasn’t got enough gravity to matter. We can pretend the sun doesn’t move. We’ll make a little error, but it’ll be small enough we don’t have to care. So we’re down to five things to keep track of.

    And we’ll do better. The strength of the attractive force isn’t going to change because we don’t need a universe that complicated. The mass of the sun and the planet? Well, that could change, if we wanted to work out how rockets behave. We don’t. So their masses are not going to change. So that’s three things whose value we might not have, but which aren’t going to change. We’ll give those numbers labels that will be letters, but there’s nothing to keep track of. They don’t change. We only have to worry about the x- and y-coordinates of the planet.

    But we don’t even have to do that, not really. The force between the sun and the planet depends on how far apart they are. This almost begs us to use polar coordinates instead of Cartesian coordinates. In polar coordinates we identify a point by two things. First is how far it is from the origin. Second is what angle the line from the origin to that point makes with some reference line. And if we’re looking for a circular orbit, then we don’t care what the angle is. It’s going to start at some arbitrary value and increase (or decrease) steadily in time. We don’t have to keep track of it. The only thing that changes that we have to keep track of is the distance between the sun and the planet. Since this is a distance, we naturally call this ‘r’. Well, it’s the radius of the circle traced out by the planet. That’s why it makes sense.
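    The switch to polar coordinates is a couple of lines of code, sketched here as my own illustration: convert both ways, and watch that on a circular orbit r never changes even as the angle does.

    ```python
    import math

    def to_polar(x, y):
        """Cartesian (x, y) -> polar (r, theta), with theta in radians."""
        return math.hypot(x, y), math.atan2(y, x)

    def to_cartesian(r, theta):
        """Polar (r, theta) -> Cartesian (x, y)."""
        return r * math.cos(theta), r * math.sin(theta)

    # A circular orbit: theta increases steadily, r stays put.
    radius = 3.0
    for step in range(4):
        theta = 0.7 * step                  # arbitrary steady increase
        x, y = to_cartesian(radius, theta)
        r, _ = to_polar(x, y)
        print(round(r, 6))
    ```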

    So we have one thing that changes, r. And we have a couple things whose value we don’t know, but which aren’t going to change during the problem. This is getting manageable. (Later on, when we’ll want to allow for elliptic or other funny-shaped orbits, we’ll need an angle θ. But by then we’ll be so comfortable with one variable we’ll be looking to get the thrill of the challenge back.)

    When I pick this up again I mean to introduce all the kinds of central forces that we might possibly look at. And then how right away we can see there’s no such thing as an orbit. Should be fun.

     
  • Joseph Nebus 3:00 pm on Wednesday, 18 May, 2016 Permalink | Reply

    How Interesting Is A Baseball Score? Some Further Results 


    While researching for my post about the information content of baseball scores I found some tantalizing links. I had wanted to know how often each score came up. From this I could calculate the entropy, the amount of information in the score. That’s the sum, taken over every outcome, of minus one times the frequency of that outcome times the base-two logarithm of that frequency. And I couldn’t find that.
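    That sum is short enough to write as code. A sketch of my own; the run-total counts below are invented placeholders, not real baseball data.

    ```python
    import math

    def entropy_bits(counts):
        """Shannon entropy: minus the sum of p * log2(p) over every outcome."""
        total = sum(counts)
        return -sum((c / total) * math.log2(c / total) for c in counts if c > 0)

    # Made-up frequencies for single-team run totals 0, 1, 2, and so on.
    counts = [21, 30, 33, 35, 33, 27, 20, 14, 9, 6, 4, 2, 1]
    print(round(entropy_bits(counts), 2))
    ```

    A fair coin works out to exactly one bit, and four equally likely outcomes to exactly two, which is a handy sanity check on the formula.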

    An article in The Washington Post had a fine lead, though. It offers, per the title, “the score of every basketball, football, and baseball game in league history visualized”. And as promised it gives charts of how often each number of runs has turned up in a game. The most common single-team score in a game is 3, with 4 and 2 almost as common. I’m not sure of the date range for these scores. The chart says it includes (and highlights) data from “a century ago”. And as the article was posted in December 2014 it can hardly use data from after that. I can’t imagine that the 2015 season has changed much, though. And whether they start their baseball statistics at either 1871, 1876, 1883, 1891, or 1901 (each a defensible choice) should only change details.

    Frequency (in thousands) of various baseball scores. I think I know what kind of distribution this is and I mean to follow up about that.

    Philip Bump writes for The Washington Post on the scores of all basketball, football, and baseball games in (United States) major league history. Also I have thoughts about what this looks like.

    Which is fine. I can’t get precise frequency data from the chart. The chart offers how many thousands of times a particular score has come up. But there’s not the reference lines to say definitely whether a zero was scored closer to 21,000 or 22,000 times. I will accept a rough estimate, since I can’t do any better.

    I made my best guess at the frequency, from the chart. And then made a second-best guess. My best guess gave the information content of a single team’s score as a touch more than 3.5 bits. My second-best guess gave the information content as a touch less than 3.5 bits. So I feel safe in saying a single team’s score is about three and a half bits of information.

    So the score of a baseball game, with two teams scoring, is probably somewhere around twice that, or about seven bits of information.

    I have to say “around”. This is because the two teams aren’t scoring runs independently of one another. Baseball doesn’t allow for tie games except in rare circumstances. (It would usually be a game interrupted for some reason, and then never finished because the season ended with neither team in a position where winning or losing could affect their standing. I’m not sure that would technically count as a “game” for Major League Baseball statistical purposes. But I could easily see a roster of game scores counting that.) So if one team’s scored three runs in a game, we have the information that the other team almost certainly didn’t score three runs.

    This estimate, though, does fit within my range estimate from 3.76 to 9.25 bits. And as I expected, it’s closer to nine bits than to four bits. The entropy seems to be a bit less than (American) football scores — somewhere around 8.7 bits — and college basketball — probably somewhere around 10.8 bits — which is probably fair. There are a lot of numbers that make for plausible college basketball scores. There are slightly fewer pairs of numbers that make for plausible football scores. There are fewer still pairs of scores that make for plausible baseball scores. So there’s less information conveyed in knowing what the game’s score is.

     
  • Joseph Nebus 3:00 pm on Monday, 16 May, 2016 Permalink | Reply

    Reading the Comics, May 12, 2016: No Pictures Again Edition 


    I’ve hardly stopped reading the comics. I doubt I could even if I wanted at this point. But all the comics this bunch are from GoComics, which as far as I’m aware doesn’t turn off access to comic strips after a couple of weeks. So I don’t quite feel justified including the images of the comics when you can just click links to them instead.

    It feels a bit barren, I admit. I wonder if I shouldn’t commission some pictures so I have something for visual appeal. There’s people I know who do comics online. They might be able to think of something to go alongside every “Student has snarky answer for a word problem” strip.

    Brian and Ron Boychuk’s The Chuckle Brothers for the 8th of May drops in an absolute zero joke. Absolute zero’s a neat concept. People became aware of it partly by simple extrapolation. Given that the volume of a gas drops as the temperature drops, is there a temperature at which the volume drops to zero? (It’s complicated. But that’s the thread I use to justify pointing out this strip here.) And people also expected there should be an absolute temperature scale because it seemed like we should be able to describe temperature without tying it to a particular method of measuring it. That is, it would be a temperature “absolute” in that it’s not explicitly tied to what’s convenient for Western Europeans in the 19th century to measure. That zero and that instrument-independent temperature idea get conflated, and reasonably so. Hasok Chang’s Inventing Temperature: Measurement and Scientific Progress is well-worth the read for people who want to understand absolute temperature better.
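    The extrapolation is worth seeing in numbers. A sketch of my own; the volumes are manufactured to follow the ideal-gas behavior exactly, so the fit is perfect, but the shape of the argument is the historical one: fit a line to volume-versus-temperature readings and solve for the temperature where the volume would drop to zero.

    ```python
    # Gas volumes (litres) at Celsius temperatures; these readings are made up
    # to follow Charles's law exactly, with volume proportional to T + 273.15.
    temps = [0.0, 25.0, 50.0, 100.0]
    vols = [22.4 * (t + 273.15) / 273.15 for t in temps]

    # Least-squares line V = a*T + b (exact here), then the root of V = 0.
    n = len(temps)
    mt = sum(temps) / n
    mv = sum(vols) / n
    a = sum((t - mt) * (v - mv) for t, v in zip(temps, vols)) / \
        sum((t - mt) ** 2 for t in temps)
    b = mv - a * mt
    print(round(-b / a, 2))   # the extrapolated absolute zero, in Celsius
    ```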

    Gene Weingarten, Dan Weingarten & David Clark’s Barney and Clyde for the 9th is another strip that seems like it might not belong here. While it’s true that accidents sometimes lead to great scientific discoveries, what has that to do with mathematics? And the first thread is that there are mathematical accidents and empirical discoveries. Many of them are computer-assisted. There is something that feels experimental about doing a simulation. Modern chaos theory, the study of deterministic yet unpredictable systems, has at its founding myth Edward Lorenz discovering that tiny changes in a crude weather simulation program mattered almost right away. (By founding myth I don’t mean that it didn’t happen. I just mean it’s become the stuff of mathematics legend.)

    But there are other ways that “accidents” can be useful. Monte Carlo methods are often used to find extreme — maximum or minimum — solutions to complicated systems. These are good if it’s hard to find a best possible answer, but it’s easy to compare whether one solution is better or worse than another. We can get close to the best possible answer by picking an answer at random, and fiddling with it at random. If we improve things, good: keep the change. You can see why this should get us pretty close to a best-possible-answer soon enough. And if we make things worse then usually, but not always, we reject the change. Sometimes we take this “accident”. And that’s because if we only take improvements we might get caught at a local extreme. An even better extreme might be available but only by going down an initially unpromising direction. So it’s worth allowing for some “mistakes”.
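    Here’s a minimal sketch of that pick-at-random, fiddle-at-random idea. It’s my own toy example, with a fixed acceptance chance standing in for the fancier cooling schedules real Monte Carlo methods use: minimize a function with several dips, always keeping improvements and allowing the occasional “accident”.

    ```python
    import math
    import random

    def bumpy(x):
        """A function with several local dips; the deepest sits near x = -0.3."""
        return x * x + 3 * math.sin(5 * x)

    random.seed(1)
    x = 4.0                                    # start far from the best dip
    best_x, best_val = x, bumpy(x)
    for _ in range(20000):
        candidate = x + random.gauss(0, 0.3)   # fiddle with the answer at random
        if bumpy(candidate) < bumpy(x) or random.random() < 0.05:
            x = candidate                      # keep improvements, allow accidents
            if bumpy(x) < best_val:
                best_x, best_val = x, bumpy(x)
    print(round(best_x, 2), round(best_val, 2))
    ```

    Only taking improvements would strand the search in whichever dip it fell into first; the five-percent accidents are what let it wander out.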

    Mark Anderson’s Andertoons for the 10th of May is some wordplay on volume. The volume of boxes is an easy formula to remember and maybe it’s a boring one. It’s enough, though. You can work out the volume of any shape using just the volume of boxes. But you do need integral calculus to tell how to do it. So maybe it’s easier to memorize the formula for volumes of a pyramid and a sphere.

    Berkeley Breathed’s Bloom County for the 10th of May is a rerun from 1981. And it uses a legitimate bit of mathematics for Milo to insult Freida. He calls her a “log 10 times 10 to the derivative of 10,000”. The “log 10” is going to be 1. A reference to logarithm, without a base attached, means either base ten or base e. “log” by itself used to invariably mean base ten, back when logarithms were needed to do ordinary multiplication and division and exponentiation. Now that we have calculators for this, mathematicians have started reclaiming “log” to mean the natural logarithm, base e, which is normally written “ln”, but that’s still an eccentric use. Anyway, the logarithm base ten of ten is 1: 10 is equal to 10 to the first power.

    10 to the derivative of 10,000 … well, that’s 10 raised to whatever number “the derivative of 10,000” is. Derivatives take us into calculus. They describe how much a quantity changes as one or more variables change. 10,000 is just a number; it doesn’t change. It’s called a “constant”, in another bit of mathematics lingo that reminds us not all mathematics lingo is hard to understand. Since it doesn’t change, its derivative is zero. As anything else changes, the constant 10,000 does not. So the derivative of 10,000 is zero. 10 to the zeroth power is 1.

    So, one times one is … one. And it’s rather neat that kids Milo’s age understand derivatives well enough to calculate that.
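    The whole insult fits in a few lines of arithmetic. A sketch of my own, with the derivative-of-a-constant step written out as the zero it has to be:

    ```python
    import math

    log_part = math.log10(10)          # log base ten of ten: 1.0
    derivative_of_10000 = 0            # a constant never changes, so this is 0
    insult = log_part * 10 ** derivative_of_10000
    print(insult)                      # 1.0: Milo called Freida a one
    ```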

    Ruben Bolling’s Super-Fun-Pak Comix rerun for the 10th happens to have a bit of graph theory in it. One of Uncle Cap’n’s Puzzle Pontoons is a challenge to trace out a figure without retracing a line or lifting your pencil. You can’t, not this figure. One of the first things you learn in graph theory teaches how to tell, and why. And thanks to a Twitter request I’m figuring to describe some of that for the upcoming Theorem Thursdays project. Watch this space!
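    The test the puzzle turns on is one of the first results in graph theory: a connected figure can be traced in one stroke exactly when zero or two of its corners have an odd number of lines meeting there. A sketch of my own; the square-with-diagonals figure is a stand-in, since I don’t have the strip’s actual puzzle in front of me.

    ```python
    def traceable(edges):
        """One-stroke test for a connected figure. Euler's rule: possible
        exactly when 0 or 2 vertices have odd degree."""
        degree = {}
        for a, b in edges:
            degree[a] = degree.get(a, 0) + 1
            degree[b] = degree.get(b, 0) + 1
        odd = sum(1 for d in degree.values() if d % 2 == 1)
        return odd in (0, 2)

    # A square with both diagonals: every corner has three lines meeting,
    # so four odd corners, and the figure can't be traced in one stroke.
    square_plus = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"),
                   ("a", "c"), ("b", "d")]
    print(traceable(square_plus))                         # False
    triangle = [("a", "b"), ("b", "c"), ("c", "a")]
    print(traceable(triangle))                            # True
    ```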

    Charles Schulz’s Peanuts Begins for the 11th, a rerun from the 6th of February, 1952, is cute enough. It’s one of those jokes about how a problem seems intractable until you’ve found the right way to describe it. I can’t fault Charlie Brown’s thinking here. Figuring out a way to make problems familiar and easy is great.

    Shaenon K Garrity and Jeffrey C Wells’s Skin Horse for the 12th is a “see, we use mathematics in the real world” joke. In this case it’s triangles and triangulation. That’s probably the part of geometry it’s easiest to demonstrate a real-world use for, and that makes me realize I don’t remember mathematics class making use of that. I remember it coming up some, particularly in what must have been science class when we built and launched model rockets. We measured the angle the rocket reached at its highest point, and we knew how far the observing station was from the launchpad. But that wasn’t mathematics class for some reason, which is peculiar.

     
  • Joseph Nebus 3:00 pm on Friday, 13 May, 2016 Permalink | Reply

    How Interesting Is A Baseball Score? Some Partial Results 


    Meanwhile I have the slight ongoing quest to work out the information-theory content of sports scores. For college basketball scores I made up some plausible-looking score distributions and used that. For professional (American) football I found a record of all the score outcomes that’ve happened, and how often. I could use experimental results. And I’ve wanted to do other sports. Soccer was asked for. I haven’t been able to find the scoring data I need for that. Baseball, maybe the supreme example of sports as a way to generate statistics … has been frustrating.

    The raw data is available. Retrosheet.org has logs of pretty much every baseball game, going back to the forming of major leagues in the 1870s. What they don’t have, as best I can figure, is a list of all the times each possible baseball score has turned up. That I could probably work out, when I feel up to writing the scripts necessary, but “work”? Ugh.

    Some people have done the work, although they haven’t shared all the results. I don’t blame them; the full results make for a boring sort of page. “The Most Popular Scores In Baseball History”, at ValueOverReplacementGrit.com, reports the top ten most common scores from 1871 through 2010. The essay also mentions that as of then there were 611 unique final scores. And that lets me give some partial results, if we trust that blog posts from people I’d never heard of before are accurate and true. I will make that assumption over and over here.

    There’s, in principle, no limit to how many scores are possible. Baseball contains many implied infinities, and it’s not impossible that a game could end, say, 580 to 578. But it seems likely that after 139 seasons of play there can’t be all that many more scores practically achievable.

    Suppose then there are 611 possible baseball score outcomes, and that each of them is equally likely. Then the information-theory content of a score’s outcome is negative one times the logarithm, base two, of 1/611. That’s a number a little bit over nine and a quarter. You could deduce the score for a given game by asking usually nine, sometimes ten, yes-or-no questions from a source that knew the outcome. That’s a little higher than the 8.7 I worked out for football. And it’s a bit less than the 10.8 I estimate for college basketball.
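    That figure is quick to verify, supposing you trust Python’s math library:

```python
import math

# 611 equally likely final scores: bits of information in one game's score.
bits = -math.log2(1 / 611)
print(bits)  # a little over 9.25: usually nine questions, sometimes ten
```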

    And there’s obvious rubbish there. In no way are all 611 possible outcomes equally likely. “The Most Popular Scores In Baseball History” says that right there in the essay title. The most common outcome was a score of 3-2, with 4-3 barely less popular. Meanwhile it seems only once, on the 28th of June, 1871, has a baseball game ended with a score of 49-33. Some scores are so rare we can ignore them as possibilities.

    (You may wonder how incompetent baseball players of the 1870s were that a game could get to 49-33. Not so bad as you imagine. But the equipment and conditions they were playing with were unspeakably bad by modern standards. Notably, the playing field couldn’t be counted on to be flat and level and well-mowed. There would be unexpected divots or irregularities. This makes even simple ground balls hard to field. The baseball, instead of being replaced with every batter, would stay in the game. It would get beaten until it was a little smashed shell of unpredictable dynamics and barely any structural integrity. People were playing without gloves. If a game ran long enough, they would play at dusk, without lights, with a muddy ball on a dusty field. And sometimes you just have four innings that get out of control.)

    What’s needed is a guide to what are the common scores and what are the rare scores. And I haven’t found that, nor worked up the energy to make the list myself. But I found some promising partial results. In a September 2008 post on Baseball-Fever.com, user weskelton listed the 24 most common scores and their frequency. This was for games from 1993 to 2008. One might gripe that the list only covers fifteen years. True enough, but if the years are representative that’s fine. And the top scores for the fifteen-year survey look to be pretty much the same as the 139-year tally. The 24 most common scores add up to just over sixty percent of all baseball games, which leaves a lot of scores unaccounted for. I am amazed that about three in five games will have a score that’s one of these 24 choices though.

    But that’s something. We can calculate the information content for the 25 outcomes, one each of the 24 particular scores and one for “other”. This will under-estimate the information content. That’s because “other” is any of 587 possible outcomes that we’re not distinguishing. But if we have a lower bound and an upper bound, then we’ve learned something about what the number we want can actually be. The upper bound is that 9.25, above.

    The information content, the entropy, we calculate from the probability of each outcome. We don’t know what that is. Not really. But we can suppose that the frequency of each outcome is close to its probability. If there’ve been a lot of games played, then the frequency of a score and the probability of a score should be close. At least they’ll be close if games are independent, if the score of one game doesn’t affect another’s. I think that’s close to true. (Some games at the end of pennant races might affect each other: why try so hard to score if you’re already out for the year? But there’s few of them.)

    The entropy then we find by calculating, for each outcome, a product. It’s minus one times the probability of that outcome times the base-two logarithm of the probability of that outcome. Then add up all those products. There’s good reasons for doing it this way and in the college-basketball link above I give some rough explanations of what the reasons are. Or you can just trust that I’m not lying or getting things wrong on purpose.
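    That recipe is short enough to write out. This is only a sketch: I haven’t reproduced the Baseball-Fever score frequencies here, so a fair coin and the equally-likely-611-scores assumption from above stand in for real data.

```python
import math

def entropy(probabilities):
    """Shannon entropy in bits: the sum, over all outcomes, of minus one
    times the probability of that outcome times its base-two logarithm."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# A fair coin toss carries exactly one bit.
print(entropy([0.5, 0.5]))        # 1.0

# Treating all 611 scores as equally likely recovers the upper bound above.
print(entropy([1 / 611] * 611))   # a little over 9.25
```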

    So let’s suppose I have calculated this right, using the 24 distinct outcomes and the one “other” outcome. That puts the information content of a baseball score’s outcome at a little over 3.76 bits.

    As said, that’s a low estimate. Lumping about two-fifths of all games into the single category “other” drags the entropy down.

    But that gives me a range, at least. A baseball game’s score seems to be somewhere between about 3.76 and 9.25 bits of information. I expect that it’s closer to nine bits than it is to four bits, but will have to do a little more work to make the case for it.

     
    • Bunk Strutts 10:38 pm on Friday, 13 May, 2016 Permalink | Reply

      Unrelated, but it reminded me of a literature class in High School. The teacher gave multiple-choice quizzes every Friday, and I spotted patterns. By mid-semester I’d compiled a list of likely correct answers for each of the questions (i.e., 1. D; 2. B; 3. A, etc.). The pattern was consistent enough that I sold crib sheets that guaranteed a C for those who hadn’t studied. No one ever asked for a refund, and I never read Ethan Frome.

      Liked by 1 person

      • Joseph Nebus 11:27 pm on Saturday, 14 May, 2016 Permalink | Reply

        I can believe this. It reminds me of the time in Peanuts when Linus figured he could pass a true-or-false test without knowing anything. The thing students don’t realize about multiple choice questions is they are hard to write. The instructor has to come up with a reasonable question, and not just the answer but several plausible alternatives, and then has to scramble where in the choices the answer comes up.

        I remember at least once I gave out a five-question multiple choice section where all the answers were ‘B’, but my dim recollection is that I did that on purpose after I noticed I’d made ‘B’ the right answer the first three times. I think I was wondering if students would chicken out of the idea that all five questions had the same answer. But then I failed to check what the results were and if students really did turn away from the right answer just because it was too neat a pattern.

        Liked by 2 people

        • FlowCoef 1:04 am on Sunday, 15 May, 2016 Permalink | Reply

          Sometime professional MCSA test author here. Writing those things can be a bear, especially getting the distractors right.

          Like

          • Bunk Strutts 3:30 am on Sunday, 15 May, 2016 Permalink | Reply

            My story dates back to the days of mimeograph prints. I never considered the difficulty in generating the tests. In retrospect, we had a very good math department, and some of the teachers would do just what JN said – all answers were “B.” Spooked the hell out of me, and yeah, I punted to the next likely answers.

            The bonus questions were always bizarre. You could miss all the questions, but if you got the bonus you got credit for the whole thing. We were still learning how to factor and cross-multiply when we got this:

            Given: a = 1, b = 2, c = 3 etc.
            [(x-a)(x-b)(x-c) … (x-z)] = ?

            Liked by 1 person

            • Bunk Strutts 3:43 am on Sunday, 15 May, 2016 Permalink | Reply

              Last one. Got a timed geometry quiz, 10 questions. At the top of the quiz were the directions to read through all of the problems before answering. Each of the problems 1 through 9 was impossible to complete in the time allotted, but Number 10 said, “Disregard problems 1 through 9, sign your name at the top of the page and turn it in.”

              Liked by 1 person

              • Joseph Nebus 3:16 am on Monday, 16 May, 2016 Permalink | Reply

                You know, I have a vague memory of getting that sort of quiz myself, back around 1980 or so. It wasn’t in mathematics, although I’m not sure just which class it was. This was elementary school for me so all the classes kind of blended together.

                I suspect there was something in the air at the time, since I remember hearing stories about impossible quizzes like that, with disregard-all-above-problems notes. And I can’t be sure I haven’t conflated a memory of taking one with the stories of disregard-all-above-problems tests being given.

                Like

            • Joseph Nebus 3:10 am on Monday, 16 May, 2016 Permalink | Reply

              I only barely make it back to the days of mimeograph machines, as a student, although it’s close.

              That bonus question sounds maddening, although its existence makes me suspect there’s a trick I’ll have to poke it with to see.

              Like

          • Joseph Nebus 3:03 am on Monday, 16 May, 2016 Permalink | Reply

            I had interviewed once to write mathematics questions for a standardized test corporation. I didn’t get it, though, and I suspect my weakness in coming up with good distractors was the big problem. I suspect I’d do better now.

            Like

  • Joseph Nebus 3:00 pm on Wednesday, 11 May, 2016 Permalink | Reply
    Tags: requests, theorems   

    Any Requests, Theorem Thursdays Edition? 


    I don’t know just when I’ll have the energy for my next Mathematics A To Z. But I do want to do something. So for June and July I figure to run a Theorem Thursdays bit. Pitch me some theorems and I’ll do my best to explain what they’re about, or why they’re interesting, or how there might be some bit of mathematics-community folklore behind them, the way there is for, say, the Contraction Mapping Theorem.

    While I’m calling it Theorem Thursdays that’s just for the sake of marketing. It doesn’t literally need to have “theorem” in the thing’s name. The only condition I mean to put on it is that I won’t do Cantor’s Diagonal Argument — the proof that there’s more real numbers than there are integers — because it’s already been done so well, so often, by everyone. I don’t have anything to say that could add to its explanation.

    Please, put your requests in comments here. I shall try to take the first nine that I see and feel like I can be competent to handle by the end of July. And I hope I’m not doing something soon to be disastrous. I may not know exactly what I’m doing, but then, if anyone ever did know exactly what they were doing they’d never do it.

     
  • Joseph Nebus 3:00 pm on Monday, 9 May, 2016 Permalink | Reply
    Tags: fast food

    Reading the Comics, May 6, 2016: Mistakes Edition 


    I knew my readership would drop off after I fell back from daily posting. Apparently it was worse than I imagined and nobody read my little blog here over the weekend. That’s fair enough; I had to tend other things myself. Still, for the purpose of maximizing the number of page views around here, taking two whole days off in a row was a mistake. There’s some more discussed in this Reading The Comics installment.

    Word problems are dull. At least at the primary-school level. There’s all these questions about trains going in different directions or ropes sweeping out areas or water filling troughs. So Aaron McGruder’s Boondocks rerun from the 5th of May (originally run the 22nd of February, 2001) is a cute change. It’s at least the start of a legitimate word problem, based on the ways the recording industry took advantage of artists in the dismal days of fifteen years ago. I’m sure that’s all been fixed by now. Fill in some numbers and the question might interest people.

    Glenn McCoy and Gary McCoy’s The Duplex for the 5th of May is a misunderstanding-fractions joke. I’m amused by the idea of messing up quarter-pound burgers. But it also brings to mind a summer when I worked for the Great Adventure amusement park and got assigned one day as cashier at the Great American Hamburger Stand. Thing is, I didn’t know anything about the stand besides the data point that they probably sold hamburgers. So customers would order stuff I didn’t know, and I couldn’t find how to enter it on the register, and all told it was a horrible mess. If you were stuck in that impossibly slow-moving line, I am sorry, but it was management’s fault; I told them I didn’t know what I was even selling. Also I didn’t know the drink cup sizes so I just charged you for whatever you said and if I gave you the wrong size I hope it was more soda than you needed.

    On a less personal note, I have heard the claim about why one-third-pound burgers failed in United States fast-food places. Several chains tried them out in the past decade and they didn’t last, allegedly because too many customers thought a third of a pound was less than a quarter pound and weren’t going to pay more for less beef. It’s … plausible enough, I suppose, because people have never been good with fractions. But I suspect the problem is more linguistic. A quarter-pounder has a nice rhythm to it. A half-pound burger is a nice strong order to say. A third-pound burger? The words don’t even sound right. You have to say “third-of-a-pound burger” to make it seem like English, and it’s a terribly weak phrase. The fast food places should’ve put their money into naming it something that suggested big-ness but not too-big-to-eat.
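    For the record, the arithmetic the customers allegedly got wrong, checked with exact fractions:

```python
from fractions import Fraction

third = Fraction(1, 3)    # the third-pound burger
quarter = Fraction(1, 4)  # the quarter-pounder

print(third > quarter)          # True: a third-pounder has more beef
print(float(third - quarter))   # about 0.083 pounds more
```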

    Mark Tatulli’s Heart of the City for the 5th is about Heart’s dread of mathematics. Her expressed fear, that making one little mistake means the entire answer is wrong, is true enough. But how much is that “enough”? If you add together something that should be (say) 18, and you make it out to be 20 instead, that is an error. But that’s a different sort of error from adding them together and getting 56 instead.

    And errors propagate. At least they do in real problems, in which you are calculating something because you want to use it for something else. An arithmetic error on one step might grow, possibly quite large, with further steps. That’s trouble. This is known as an “unstable” numerical calculation, in much the way a tin of picric acid dropped from a great height onto a fire is an “unstable” chemical. The error might stay about as large as it started out being, though. And that’s less troublesome. A mistake might stay predictable. The calculation is “stable”. In a few blessed cases an error might even be minimized by further calculations. You have to arrange the calculations cleverly to make that possible, though. That’s an extremely stable calculation.

    And this is important because we always make errors. At least in any real calculation we do. When we want to turn, say, a formula like πr2 into a number we have to make a mistake. π is not 3.14, nor is it 3.141592, nor is it 3.14159265358979311599796346854418516. Does the error we make by turning π into some numerical approximation matter? It depends what we’re calculating, and how. There’s no escaping error and it might be a comfort to Heart, or any student, to know that much of mathematics is about understanding and managing error.
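    As a toy example of that last point — the radius here is a made-up number, and this is just the area formula evaluated two ways — here is how the error in a rounded-off π carries through to the area:

```python
import math

r = 1000.0                  # a made-up radius
area_true = math.pi * r ** 2
area_rough = 3.14 * r ** 2  # the grade-school approximation of pi

relative_error = abs(area_true - area_rough) / area_true
print(relative_error)  # about 0.0005, the same relative error as in pi itself
```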

    The further adventures of Nadine and Nina and Science Friday: 'Does it depress you to know that with the expanding universe and all the countless billions and trillions of other planets, the best-looking men probably aren't even in our galaxy?'

    Joe Martin’s Boffo for the 6th of May, 2016. The link’s already expired, I bet. Yes, the panel did appear on a Sunday.

    Joe Martin’s Boffo for the 6th of May is in its way about the wonder of very large numbers. On some reasonable assumptions — that our experience is typical, that nothing is causing traits to be concentrated one way or another — we can realize that we probably will not see any extreme condition. In this case, it’s about the most handsome men in the universe probably not even being in our galaxy. If the universe is large enough and people common enough in it, that’s probably right. But we likely haven’t got the least handsome either. Lacking reason to suppose otherwise we can guess that we’re in the vast middle.

    David L Hoyt and Jeff Knurek’s Jumble for the 6th of May mentions mathematicians and that’s enough, isn’t it? Without spoiling the puzzle for anyone, I will say that “inocci” certainly ought to be a word meaning something. So get on that, word-makers.

    SMOPT ooo--; ORFPO -o--o; INCOCI o---oo; LAUNAN ooo---. The math teacher was being reprimanded because of his -----------.

    David L Hoyt and Jeff Knurek’s Jumble for the 6th of May, 2016. While ‘ORFPO’ may not be anything, I believe there should be some company named ‘OrfPro’ that offers some kind of service.

    Dave Blazek’s Loose Parts for the 6th brings some good Venn Diagram humor back to my pages. Good. It’s been too long.

     
  • Joseph Nebus 3:00 pm on Friday, 6 May, 2016 Permalink | Reply
    Tags: lessons, planning

    What I Learned Doing The Leap Day 2016 Mathematics A To Z 


    The biggest thing I learned in the recently concluded mathematics glossary is that continued fractions have enthusiasts. I hadn’t intended to cause controversy when I claimed they weren’t much used anymore. The most I have grounds to say is that the United States educational process as I experienced it doesn’t use them for more than a few special purposes. There is a general lesson there. While my experience may be typical, that doesn’t mean everyone’s is like it. There is a mystery to learn from in that.

    The next big thing I learned was the Kullback-Leibler Divergence. I’m glad to know it now. And I would not have known it, I imagine, if it weren’t for my trying something novel and getting a fine result from it. That was throwing open the A To Z glossary to requests from readers. At least half the terms were ones that someone reading my original call had asked for.

    And that was thrilling. The biggest point is that it gave me a greater feeling of communicating with specific people than most of the things I’ve written have. I understand that I have readers, and occasionally chat with some. This was a rare chance to feel engaged, though.

    And getting asked things I hadn’t thought of, or in some cases hadn’t heard of, was great. It foiled the idea of two months’ worth of easy postings, but it made me look up and learn and think about a variety of things. And also to re-think them. My first drafts of the Dedekind Domain and the Kullback-Leibler divergence essays were completely scrapped, and the Jacobian made it through only with a lot of rewriting. I’ve been inclined to write with few equations and even fewer drawings around here. Part of that’s to be less intimidating. Part of that’s laziness. Some stuff is wonderfully easy to express in a sketch, but transferring that to a digital form is the heavy work of getting out the scanner and plugging it in. Or drawing from scratch on my iPad. Cleaning it up is even more work. So better to spend a thousand extra words on the setup.

    But that seemed to work! I’m especially surprised that the Jacobian and the Lagrangian essays seemed to make sense without pictures or equations. Homomorphisms and isomorphisms were only a bit less surprising. I feel like I’ve been writing better thanks to this.

    I do figure on another A To Z for sometime this summer. Perhaps I should open nominations already, and with a better-organized scheme for knocking out letters. Some people were disappointed (I suppose) by picking letters that had already got assigned. And I could certainly use time and help finding more x- and y-words. Q isn’t an easy one either.

     