What’s Going On In The Old Universe


Last time in this infinitely-old universe puzzle, we found that by making a universe of only three kinds of atoms (hydrogen, iron, and uranium) which shifted to one another with fixed chances over the course of time, we’d end up with the same distribution of atoms regardless of what the distribution of hydrogen, iron, and uranium was to start with. That seems like it might require explanation.

(For people who want to join us late without re-reading: I got to wondering what the universe might look like if it just ran on forever, stars fusing lighter elements into heavier ones, heavier elements fissioning into lighter ones. So I looked at a toy model where there were three kinds of atoms, dubbed hydrogen for the lighter elements, iron for the middle, and uranium for the heaviest, and made up some numbers saying how likely hydrogen was to be turned into heavier atoms over the course of a billion years, how likely iron was to be turned into something heavier or lighter, and how likely uranium was to be turned into lighter atoms. And sure enough, if the rates of change stay constant, then the universe goes from any initial distribution of atoms to a single, unchanging-ever-after mix in surprisingly little time, considering it’s got a literal eternity to putter around.)

The first question, it seems, is whether I happened to pick a freak set of numbers for the changes of one kind of atom to another. It’d be a stroke of luck, but, these things happen. In my first model, I gave hydrogen a 25 percent chance of turning to iron, and no chance of turning to uranium, in a billion years. Let’s change that so any given hydrogen atom has a 20 percent chance of turning to iron and a 20 percent chance of turning to uranium. Similarly, instead of iron having no chance of turning to hydrogen and a 40 percent chance of turning to uranium, let’s try giving each iron atom a 25 percent chance of becoming hydrogen and a 25 percent chance of becoming uranium. Uranium, first time around, had a 40 percent chance of becoming hydrogen and a 40 percent chance of becoming iron. Let me change that to a 60 percent chance of becoming hydrogen and a 20 percent chance of becoming iron.

With these chances of changing, a universe that starts out purely hydrogen settles on being about 50 percent hydrogen, a little over 28 percent iron, and a little over 21 percent uranium in about ten billion years. If the universe starts out with equal amounts of hydrogen, iron, and uranium, however, it settles over the course of eight billion years to … 50 percent hydrogen, a little over 28 percent iron, and a little over 21 percent uranium. If it starts out with no hydrogen and the rest of matter evenly split between iron and uranium, then over the course of twelve billion years it gets to … 50 percent hydrogen, a little over 28 percent iron, and a little over 21 percent uranium.
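Since this is just a three-state chain of transition probabilities, it’s easy to watch the convergence happen yourself. Here’s a minimal sketch in Python — the language and the names are my choices for illustration, not anything from the model itself — iterating the rules above:

    # Where each kind of atom goes in a billion years: stay put, or
    # become one of the other two kinds.  Rows and columns are H, Fe, U.
    transition = [
        [0.60, 0.20, 0.20],  # hydrogen
        [0.25, 0.50, 0.25],  # iron
        [0.60, 0.20, 0.20],  # uranium
    ]

    def step(mix):
        """Advance the universe one billion years."""
        return [sum(mix[i] * transition[i][j] for i in range(3))
                for j in range(3)]

    mix = [1.0, 0.0, 0.0]  # start as pure hydrogen
    for eon in range(12):
        mix = step(mix)
    print(mix)  # settles near [0.500, 0.286, 0.214]

Change the starting mix to whatever you like; the printout barely moves.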

Perhaps the problem is that I’m picking these numbers, and I’m biased towards things that are pretty nice ones — halves and thirds and two-fifths and the like — and maybe that’s causing this state where the universe settles down very quickly and stops changing any. We should at least try that before supposing there’s necessarily something more than coincidence going on here.

So I set the random number generator to produce some element changes which can’t be susceptible to my bias for simple numbers. Give hydrogen a 44.5385 percent chance of staying hydrogen, a 10.4071 percent chance of becoming iron, and a 45.0544 percent chance of becoming uranium. Give iron a 25.2174 percent chance of becoming hydrogen, a 32.0355 percent chance of staying iron, and a 42.7471 percent chance of becoming uranium. Give uranium a 2.9792 percent chance of becoming hydrogen, a 48.9201 percent chance of becoming iron, and a 48.1007 percent chance of staying uranium. (Clearly, by the way, I’ve given up on picking numbers that might reflect some actual if simple version of nucleosynthesis and I’m just picking numbers for numbers’ sake. That’s all right; the question this essay asks is, are we stuck with an unchanging yet infinitely old universe?)

And the same thing happens again: after nine billion years a universe starting from pure hydrogen will be about 18.7 percent hydrogen, about 35.7 percent iron, and about 45.6 percent uranium. Starting from no hydrogen, 50 percent iron, and 50 percent uranium, we get to the same distribution in again about nine billion years. A universe beginning with equal amounts hydrogen, iron, and uranium under these rules gets to the same distribution after only seven billion years.
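For the curious, the settled-down mix is what probability people call the stationary distribution of the transition matrix, and you can get it without iterating at all. A sketch, using numpy purely as a convenience of mine (any linear-algebra tool would do):

    import numpy as np

    # The randomly generated chances above, as a matrix.
    # Rows and columns are hydrogen, iron, uranium.
    P = np.array([
        [0.445385, 0.104071, 0.450544],
        [0.252174, 0.320355, 0.427471],
        [0.029792, 0.489201, 0.481007],
    ])

    # A mix that no longer changes satisfies pi P = pi, which makes pi
    # a left eigenvector of P with eigenvalue 1.
    values, vectors = np.linalg.eig(P.T)
    pi = np.real(vectors[:, np.argmax(np.real(values))])
    pi /= pi.sum()
    print(pi)  # roughly [0.187, 0.357, 0.456]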

The conclusion is that this settling down doesn’t seem to be caused by picking numbers that are too particularly nice-looking or obviously attractive; and the final distributions don’t seem to have an obvious link to the probabilities of changing. There seems to be something happening here, though admittedly we haven’t proven that rigorously. To spoil a successor article in this thread: there is something here, and it’s a big thing.

(Also, no, we’re not stuck with an unchanging universe, and good on you if you can see ways to keep the universe changing without, like, having the probability of one atom changing to another itself vary in time.)

In A Really Old Universe


So, my thinking about an “Olbers Universe” infinitely old and infinitely large in extent brought me to a depressing conclusion that such a universe has to be empty, or at least just about perfectly empty. But we can still ponder variations on the idea and see if that turns up anything. For example, what if we have a universe that’s infinitely old, but not infinite in extent, either because space is limited or because all the matter in the universe is in one neighborhood?

Suppose we have stars. Stars do many things. One of them is turning elements into other elements, mostly heavier ones. For the lightest of atoms — hydrogen and helium, for example — stars create heavier elements by fusing them together. Making the heavier atoms from these lighter ones releases energy, in the net, which is why fusion is constantly thought of as a potential power source. And that’s good for making elements up to as heavy as iron. After that point, fusion becomes a net energy drain. But heavier elements still get made as the star dies: when it can’t produce energy by fusion anymore, the great mass of the star collapses on itself, and that shoves atomic nuclei together regardless of the fact that this soaks up energy. (Warning: the previous description of nucleosynthesis, as it’s called, was very simplified and a little anthropomorphized, and wasn’t seriously cross-checked against developments in the field since I was a physics-and-astronomy undergraduate. Do not use it to attempt to pass your astrophysics qualifier. It’s good enough for everyday use, what with how little responsibility most of us have for stars.)

The important thing to me is that a star begins as a ball of dust, produced by earlier stars (and, in our finite universe, from the Big Bang, which produced a lot of hydrogen and helium and some of the other lightest elements), that condenses into a star, turns many of the elements in it into other elements, and then returns to a cloud of dust that mixes with other dust clouds and forms new stars.

Now. Over time, over the generations of stars, we tend to get heavier elements out of the mix. That’s pretty near straightforward mathematics: if you have nothing but hydrogen and helium — atoms that have one or two protons in the nucleus — it’s quite a trick to fuse them together into something with more than two, three, or four protons in the nucleus. If you have hydrogen, helium, lithium, and beryllium to work with — one, two, three, and four protons in the nucleus — it’s easier to get products with anywhere from two up to eight protons in the nucleus. And so on. The tendency is for each generation of stars to have relatively less hydrogen and helium and relatively more of the heavier atoms in its makeup.

So what happens if you have infinitely many generations? The first guess would be, well, stars will keep gathering together and fusing together as long as there are any elements lighter than iron, so that eventually there’d be a time when there were no (or at least no significant) amounts of elements lighter than iron, at which point the stars cease to shine. There’s nothing more to fuse together to release energy and we have a universe of iron- and heavier-metal ex-stars. I’m not sure if this is an even more depressing infinite universe than the infinitely large, infinitely old one which couldn’t have anything at all in it.

Except that this isn’t the whole story. Heavier elements than iron can release energy by fission, splitting into two or more lighter elements. Uranium and radium and a couple other elements are famous for this, but I believe every element has at least some radioactive isotopes. Popular forms of fission will produce alpha particles, which is what they named this particular type of radioactive product before they realized it was just the nucleus of a helium atom. Other types of radioactive decay will produce neutrons, which, if they’re not in the nucleus of an atom, will last an average of about fifteen minutes before decaying into a proton — a hydrogen nucleus — and some other stuff. Some more exotic forms of radioactive decay can produce protons by themselves, too. I haven’t checked the entire set of possible fission byproducts but I wouldn’t be surprised if most of the lighter elements can be formed by something’s breaking down.

In short, even if we fused the entire contents of the universe into atoms heavier than iron, we would still get out a certain amount of hydrogen and of helium, and also of other lighter elements. Stars turn hydrogen and helium, eventually, into very heavy elements; but the very heavy elements turn at least part of themselves back into hydrogen and helium.

So, it seems plausible, at least qualitatively, that given enough time to work there’d be a stable condition: hydrogen and helium being turned into heavier atoms at the same rate that heavier atoms are producing hydrogen and helium in their radioactive decay. And an infinitely old universe has enough time for anything.

And that’s, to me, anyway, an interesting question: what would the distribution of elements look like in an infinitely old universe?

(I should point out here that I don’t know. I would be surprised if no one in the astrophysics community has worked it out, at least in a rough form, for an as-authentic-as-possible set of assumptions about how nucleosynthesis works. But I am so ignorant of the literature I’m not sure how even to find the answer they’ve found. I can think of it as a mathematical puzzle at least, though.)

In A Really Big Universe


I’d got to thinking idly about Olbers’ Paradox, the classic question of why the night sky is dark. It’s named for Heinrich Wilhelm Olbers, 1758-1840, who of course was not the first person to pose the problem nor to give a convincing answer to it, but, that’s the way naming rights go.

It doesn’t sound like much of a question at first; after all, it’s night. But if we suppose the universe is infinitely large and infinitely old, then, along the path of any direction you look in the sky, day or night, there’ll be a star. The star may be very far away, so that it’s very faint; but it also takes up much less of the sky for being so far away. The result is that the star’s intensity, as a function of how much of the sky it takes up — its surface brightness — is no smaller. And there’ll be stars shining just as intensely in directions near to that first star. The sky in an infinitely large, infinitely old universe should be a wall of stars.

Oh, some stars will be dimmer than average, and some brighter, but that doesn’t matter much. We can suppose the average star is of average brightness and average size for reasons that are right there in the name of the thing; it makes the reasoning a little simpler and doesn’t change the result.
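The standard way to make that argument quantitative, writing n for the number of stars per unit volume and L for the average star’s luminosity (my symbols, for illustration): a star at distance r delivers a flux of \frac{L}{4\pi r^2}, while a thin shell of radius r and thickness dr centered on the viewer holds 4\pi r^2 n \, dr stars. The shell’s total contribution is therefore

4\pi r^2 n \, dr \cdot \frac{L}{4\pi r^2} = n L \, dr ,

which doesn’t depend on r at all; every shell, however distant, adds the same amount of light, and adding up infinitely many shells gives an infinite total.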

The reason there is darkness is that our universe is neither infinitely large nor infinitely old. There aren’t enough stars to fill the sky and there’s not enough time for the light from all of them to get to us.

But we can still imagine standing on a planet in such an Olbers Universe (to save myself writing “infinitely large and infinitely old” too many times), with enough vastness and enough time to have a night sky that looks like a shell of starlight, and that’s what I was pondering. What might we see if we looked at the sky in these conditions?

Well, light, obviously; we can imagine the sky looking as bright as the sun, but in all directions above the horizon. The sun takes up a very tiny piece of the sky — it’s about as wide across as your thumb, held at arm’s length, and try it if you don’t believe me (better, try it with the Moon, which is about the same size as the Sun and easier to look at) — so, multiply that brightness by the ratio of the whole sky to your thumb and imagine the investment in sunglasses this requires.
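If you want a rough figure for that ratio: the Sun’s disc is about half a degree across, and a little solid-angle arithmetic (my numbers, good to maybe a factor of two) gives:

    import math

    # Solid angle of a disc about half a degree across, compared with
    # the whole celestial sphere.
    sun_radius = math.radians(0.5 / 2)           # angular radius, radians
    sun_solid_angle = math.pi * sun_radius ** 2  # small-angle approximation
    whole_sky = 4 * math.pi

    print(whole_sky / sun_solid_angle)  # about 200,000

So the sky would pour down something like two hundred thousand suns’ worth of light, before we even get to the next paragraph’s problem.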

It’s worse than that, though. Yes, in any direction you look there’ll be a star, but if you imagine going on in that direction there’ll be another star, eventually. And another one past that, and another past that yet. And the light — the energy — of those stars shining doesn’t disappear because there’s a star between it and the viewer. The heat will just go into warming up the stars in its path and get radiated through.

This is why interstellar dust, or planets, or other non-radiating bodies don’t answer why the sky could be dark in a vast enough universe. Anything that gets enough heat put into it will start to glow and start to shine from that light. The dust and planets will slow down the waves of heat from the stars behind them, but given enough time, the heat will get through, and in an infinitely old universe, there is enough time.

The conclusion, then, is that our planet in an Olbers Universe would get an infinite amount of heat pouring onto it, at all times. It’s hard to see how life could possibly exist in such circumstances; water would boil away — rock would boil away — and the planet would just evaporate into dust.

Things get worse, though: it’s not just our planet that would get boiled away like this, but as far as I can tell, the stars too. Each star would be getting an infinite amount of heat pouring into it. It seems to me this requires the matter making up the stars to get so hot it would boil away, just as the atmosphere and water and surface of the imagined planet would, until the star — until all stars — disintegrate. At this point I have to think of the great super-science space-opera writers of the early 20th century, listening to the description of a wave of heat that boils away a star, and sniffing, “Amateurs. Come back when you can boil a galaxy instead”. Well, the galaxy would boil too, for the same reasons.

Even once the stars have managed to destroy themselves, though, the remaining atoms would still have a temperature, and would still radiate faint light. And that faint light, multiplied by the infinitely many atoms and all the time they have, would still accumulate to an infinitely great heat. I don’t know how hot you have to get to boil a proton into nothingness — or a quark — but if there is any temperature that does it, this universe would reach it.

So the result, I had to conclude, is that an infinitely large, infinitely old universe could exist only if it didn’t have anything in it, or at least if it had nothing that wasn’t at absolute zero in it. This seems like a pretty dismal result and left me looking pretty moody for a while, even if I was sure that E. E. “Doc” Smith would smile at me for working out the heat-death of quarks.

Of course, there’s no reason that a universe has to, or even should, be pleasing to imagine. And there is a little thread of hope for life, or at least existence, in an Olbers Universe.

All the destruction-of-everything comes about from the infinitely large number of stars, or other radiating bodies, in the universe. If there’s only finitely much matter in the universe, then, their total energy doesn’t have to add up to the point of self-destruction. This means giving up an assumption that was slipped into my Olbers Universe without anyone noticing: the idea that it’s about uniformly distributed, so that if you compare any two volumes of equal size, from any time, they have about the same number of stars in them. This property, matter being spread about evenly everywhere, is what cosmologists call “homogeneity”.

Our universe seems to have this homogeneity. Oh, there are spots where you can find many stars (like the center of a galaxy) and spots where there are few (like the space in-between galaxies), but on a large enough scale the galaxies themselves seem to be pretty uniformly distributed.

But an imagined universe doesn’t have to have this property. If we suppose an Olbers Universe without homogeneity, then we can have stars and planets and maybe even life. It could even have many times the mass, the number of stars and everything, that our universe has, spread across something much bigger than our universe. But it does mean that this infinitely large, infinitely old universe will have all its matter clumped together into some section, and nearly all the space — in a universe with an incredible amount of space — will be empty.

I suppose that’s better than a universe with nothing at all, but somehow only a little better. Even though it could be a universe with more stars and more space occupied than our universe has, that infinitely vast emptiness still haunts me.

(I’d like to note, by the way, that all this universe-building and reasoning hasn’t required any equations or anything like that. One could argue this has diverted from mathematics and cosmology into philosophy, and I wouldn’t dispute that, but can imagine philosophers might.)

Reading the Comics, June 16, 2014: Cleaning Out Before Summer, I Guess, Edition


I had thought the folks at Comic Strip Master Command got most of their mathematics-themed comics cleaned out ahead of the end of the school year (United States time zones) by last week, and then over the course of the weekend they went and published about a hundred million of them, so let me try catching up on that before the long dry spell of summer sets in. (And yet none of them mentioned monkeys writing Shakespeare; go figure.) I’m kind of expecting an all-mathematics-strips series tomorrow morning.

Jason Chatfield’s Ginger Meggs (June 12) puns a bit on negative numbers as also meaning downbeat or pessimistic ones. Negative numbers tend to make people uneasy, when they’re first encountered. It took western mathematics several centuries to get quite fully comfortable with them, and that even with the good example of debts serving as a mental model of what negative numbers might mean. Descartes, for example, apparently used four separate quadrants, giving points their positions to the right and up, to the left and up, to the left and down, or to the right and down from the origin point, rather than deal with negative numbers; and the Fahrenheit temperature scale was pretty much designed around the constraint that Daniel Fahrenheit shouldn’t have to deal with negative numbers in measuring the temperature in his hometown of Danzig. I have seen references to Immanuel Kant writing about the theoretical foundation of negative numbers, but not a clear explanation of just what he did, alas. And skepticism of exotic number constructs would last; they’re not called imaginary numbers because people appreciated the imaginative leaps that working with the square roots of negative numbers inspired.

Steve Breen and Mike Thompson’s Grand Avenue (June 12) served notice that, just like last summer, Grandma is going to make sure the kids experience mathematics as a series of chores they have to endure through an otherwise pleasant summer break.

Mike Twohy’s That’s Life (June 12) might be a marginal inclusion here, but it does refer to a lab mouse that’s gone from merely counting food pellets to cost-averaging them. The mathematics abilities of animals are pretty amazing things, certainly, and I’d also be impressed by an animal that was so skilled in abstract mathematics that it was aware “how much does a thing cost?” is a pretty tricky question when you look hard at it.

Jim Scancarelli’s Gasoline Alley (June 13) features a punch line that’s familiar to me — it’s what you get by putting a parrot and the subject of geometry together — although the setup seems clumsy to me. I think that’s because the kid has to bring up geometry out of nowhere in the first panel. Usually the setup as I see it is more along the lines of “what geometric figure is drawn by a parrot that then leaves the room”, which I suppose also brings geometry up out of nowhere to start off, really. I guess the setup feels clumsy to me because I’m trying to imagine the dialogue as following right after the previous day’s, so the flow of the conversation feels odd.

Eric the Circle (June 14), this one signed “andel”, riffs on the popular bit of mathematics trivia that in a randomly selected group of 22 people there’s about a fifty percent chance that some pair of them will share a birthday; that there’s a coincidental use for 22 in estimating π is, believe it or not, something I hadn’t noticed before.
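The computation behind that trivia, for anyone who wants to check it (this assumes 365 equally likely birthdays, as the puzzle traditionally does):

    def shared_birthday(n):
        """Chance that at least two of n people share a birthday."""
        p_distinct = 1.0
        for k in range(n):
            p_distinct *= (365 - k) / 365
        return 1 - p_distinct

    print(shared_birthday(22))  # about 0.476
    print(shared_birthday(23))  # about 0.507, the first n past one-half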

Pab Sungenis’s New Adventures of Queen Victoria (June 14) plays with infinities, and whether the phrase “forever and a day” could actually mean anything, or at least anything more than “forever” does. This requires having a clear idea what you mean by “forever” and, for that matter, by “more”. Normally we compare infinitely large sets by working out whether it’s possible to form pairs which match one element of the first set to one element of the second, and seeing whether elements from either set have to be left out. That sort of work lets us realize that there are just as many prime numbers as there are counting numbers, and just as many counting numbers as there are rational numbers (positive and negative), but that there are more irrational numbers than there are rational numbers. And, yes, “forever and a day” would be the same length of time as “forever”, but I suppose the Innamorati (I tried to find his character’s name, but I can’t, so, Pab Sungenis can come in and correct me) wouldn’t do very well if he promised love for the “power set of forever”, which would be a bigger infinity than “forever”.

Mark Anderson’s Andertoons (June 15) is actually roughly the same joke as the Ginger Meggs from the 12th, students mourning their grades with what’s really a correct and appropriate use of mathematics-mentioning terminology.

Keith Knight’s The Knight Life (June 16) introduces a “personal statistician”, which is probably inspired by the measuring of just everything possible that modern sports has gotten around to doing. But the notion of keeping track of just what one is doing, and how effectively, is old and, at least in principle, sensible. It’s implicit in budgeting (time, money, or other resources) that you are going to study what you do, and what you want to do, and what’s required by what you want to do, and what you can do. And careful tracking of what one’s doing leads to what’s got to be a version of the paradox of Achilles and the tortoise, in which the time (and money) spent on recording the fact of one’s recordings starts to spin out of control. I’m looking forward to that. Don’t read the comments.

Max Garcia’s Sunny Street (June 16) shows what happens when anthropomorphized numerals don’t appear in Scott Hilburn’s The Argyle Sweater for too long a time.

A Wonder of Rationality


I’d like to talk about a neat little property of the rational numbers, which does involve there being infinitely many of them, and which isn’t about how there are just as many rational numbers as there are integers but there are more real numbers than there are rational numbers. (It’s true, but the point has already been well-covered by every mathematics blog ever.) Anyway, I’m laying the groundwork for something else.

Now, it’s common in mathematics to talk about the set of rational numbers, the numbers you get as one integer divided by another, as Q. The notation seems to trace back to the 1930s and the Bourbaki group, which did so much to put mathematics on a basis of set theory, and the Q was chosen as it’s the start of “quotient”, which rational numbers after all are. (“R” was already called on to stand for the set of Real numbers.) I’m interested in two subsets of the rational numbers: the first of them, all the positive rational numbers. For that I’ll write Q+. The other is just the rational numbers between zero and one. For that I’ll write Q(0, 1).

I can match every rational number between 0 and 1 to some rational number greater than zero. Here’s one way (there are many ways) to do it. Start out with some number, let me call it q, that’s in Q(0, 1). That’s a rational number between zero and one. Well, let me take its reciprocal: the result of one divided by q, which is going to be some rational number greater than 1. That’s a nice matching of the rational numbers between zero and one to the rational numbers greater than one, but I claimed I’d do this matching for rational numbers greater than zero. No matter; I can get there easily. Take that reciprocal and subtract one from it. This new number — let me call it p — is a rational number greater than zero, something in Q+. That is, each q (a rational between 0 and 1) can be matched with p (a positive rational), among other ways, by letting p equal (1/q) minus 1.

For example, let’s say, let q be 3/4. Then the reciprocal of that is 4/3, and subtracting one from that gets us a p of 1/3, which is certainly a positive number.

Or let’s say that q is 2/9. Then the reciprocal of q is 9/2, and subtracting one from that gets us a p of 7/2. (Some math teachers would want to change that 9/2 into 4 ½, and that 7/2 into 3 ½, but I don’t really know why they bother. I suppose the teachers are having fun and it’s quite easy to test, so, let them.)

If we start with a q of 3/32, then we go to its reciprocal, 32/3, and subtract one from that for a p of 29/3.

And I can run it the other way, too. Pick some rational number p, anything that’s positive. Add one to it, which will make it a rational number greater than 1. Take the reciprocal of this, and you have a rational number between 0 and 1. That is, p (a positive rational) can be matched with q (a rational between 0 and 1) by (again, among other ways) letting q equal 1/(p + 1).

For example, let’s let p be 3/5. Add one to that and we have 8/5, and the reciprocal of that is our q, 5/8, which is a rational number between zero and one.

Or let p be 14. Add one to that and we have 15, and the reciprocal of that is our q, 1/15, which is again between zero and one.

Or say that p is 39/7. Add one to that and we have 46/7, and the reciprocal of that is q, 7/46.
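All three of those examples, and the matching rules themselves, can be checked with exact rational arithmetic. A sketch using Python’s fractions module (the function names are mine):

    from fractions import Fraction

    def to_positive(q):
        """Match q in Q(0, 1) with p in Q+: p = 1/q - 1."""
        return 1 / q - 1

    def to_unit_interval(p):
        """Match p in Q+ with q in Q(0, 1): q = 1/(p + 1)."""
        return 1 / (p + 1)

    print(to_positive(Fraction(3, 4)))        # 1/3
    print(to_positive(Fraction(2, 9)))        # 7/2
    print(to_unit_interval(Fraction(3, 5)))   # 5/8
    print(to_unit_interval(Fraction(39, 7)))  # 7/46

    # Each rule undoes the other, which is what makes the matching
    # one-to-one.
    q = Fraction(3, 32)
    assert to_unit_interval(to_positive(q)) == q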

There are many ways to do this sort of matching. For example, you can match the rationals between 0 and 1 to the rationals between -1 and 1, or for that matter to all the rationals, positive and negative. It doesn’t have to be with a single rule, either; you’re allowed to set up a rule like “if q is less than one-half, find p by this rule; if q is greater than one-half, find p by that rule; if q is exactly one-half, do this other thing instead”. You can have a good bit of mental exercise by picking sets and trying to work out rules that match the numbers in one to the numbers in the other, and if I were smart I might try making a weekly puzzle section for that.

A reasonable person may point out that it’s absurd that you can match Q(0, 1) exactly to Q+. The rules I worked out give you one and only one p for each q, and vice-versa; but, the rationals between zero and one are all also positive rational numbers. That you can match the positive rational numbers to a subset of the positive rational numbers is counter-intuitive, at least when you first encounter it. It’s also the simplest definition for being “infinitely large” that I know of, though; if you can set up a one-to-one match of a set with a proper subset of itself, the set is considered to have an infinitely large cardinality, which is one of the ways mathematicians describe the sizes of things.

Reading the Comics, February 11, 2014: Running Out Pi Edition


I’d figured I had enough mathematics comic strips for another of these entries, and discovered during the writing that I had much more to say about one than I had anticipated. So, although it’s no longer quite the 11th, or close to it, I’m going to exile the comics from after that date to the next of these entries.

Melissa DeJesus and Ed Power’s My Cage (February 6, rerun) makes another reference to the infinite-monkeys-with-typewriters scenario, which, since it takes place in a furry universe, allows access to the punchline you might expect. I’ve written about that before, as the infinite monkeys problem sits at a wonderful intersection of important mathematics and captivating metaphors.

Gene Weingarten, Dan Weingarten, and David Clark’s Barney and Clyde (starting February 10) (and when am I going to make a macro for that credit and title?) has Cynthia given a slightly baffling homework lesson: to calculate the first ten digits of pi. The story continues through the 11th, the 12th, the 13th, finally resolving on the 14th, in the way such stories must. I admit I’m not sure why exactly calculating the digits of π would be a suitable homework assignment; I can see working out division problems until the numbers start repeating, or doing a square root or something by hand until you’ve found enough digits.

π, though … well, there’s the question of why it’d be an assignment to start with, but also, what formula for generating π could be plausibly appropriate for an elementary school class. The one that seems obvious to me — π is equal to four times (1/1 minus 1/3 plus 1/5 minus 1/7 plus 1/9 minus 1/11 and so on and so on) — also takes way too long to work. If a little bit of coding is right, it takes something like 160 terms to get just the first two digits of π correct and that isn’t even stable. (The first 160 terms add to 3.135; the first 161 terms to 3.147.) Getting it to ten digits would take —

Well, I thought it might be as few as 10,000 terms, because it turns out the sum of the first ten thousand terms in that series is 3.1414926536, which looks dead-on until you notice that π is 3.1415926536. That’s a neat coincidence, though.
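That’s easy enough to reproduce, if you want to watch the crawl yourself:

    def leibniz(n):
        """Sum the first n terms of 4*(1 - 1/3 + 1/5 - 1/7 + ...)."""
        return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n))

    print(leibniz(160))    # about 3.1353
    print(leibniz(161))    # about 3.1478
    print(leibniz(10000))  # 3.14149265359..., deceptively close to pi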

Anyway, obviously, that formula wouldn’t do, and we see on the strip of the 14th that Lucretia isn’t using that. There are a great many formulas that generate the value of π, any of which might be used for a project like this; some of them get the digits right quite rapidly, usually at a cost of being very complicated. The formula shown in the strip of the 14th, though, isn’t one of the complicated ones. Lucretia’s work uses the formula \pi = \sqrt{12} \cdot \sum_{k = 0}^{\infty} \frac{(-3)^{-k}}{2k + 1} , which takes only about 21 terms to get to the demanded ten digits of accuracy. I don’t want to guess how many pages of work it would take to get to 13,908 places.
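Here’s that series in runnable form, if you’d like to see how quickly it closes in (floating point gives out around the fifteenth digit, but that’s plenty for ten):

    import math

    def sharp(n):
        """Sum the first n terms of sqrt(12) * sum of (-3)^(-k)/(2k+1)."""
        return math.sqrt(12) * sum((-3.0) ** (-k) / (2 * k + 1)
                                   for k in range(n))

    print(sharp(21))  # 3.14159265359..., ten digits already
    print(math.pi)    # 3.141592653589793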

If I don’t miss my guess the formula used here is one by Abraham Sharp, an astronomer and mathematician who worked for the Royal Observatory at Greenwich and set a record by calculating π to 72 decimal digits. He was also an instrument-maker, of rather some skill, and I found a page purporting to show his notes of how to cut some complicated polyhedrons out of a block of wood, so, if my father wants to carve a 120-sided figure, here’s his chance. Sharp seems to have started with Leibniz’s formula (yes, that Leibniz) — that the arctangent of a number x is equal to x minus one-third x cubed plus one-fifth x to the fifth power minus one-seventh x to the seventh power, et cetera — with the knowledge that the arctangent of the square root of one-third is equal to one-sixth π and produced this series that looks a lot like the one we started with, but which gets digits correct so very much more quickly.

Darrin Bell’s Candorville (February 13) is primarily a bit of guys insulting friends, but what do you know and π makes a cameo appearance here.

Shannon Wheeler’s Too Much Coffee Man (February 10) is a Venn Diagram cartoon in the service of arguing that Venn Diagram cartoons aren’t funny. Putting aside the smoke and sparks popping out of the Nomad space probe which Kirk and Spock are rushing to the transporter room, I don’t think it’s quite fair: the ease the Venn diagram gives to grouping together concepts and showing how they relate helps organize one’s understanding of concepts and can be a really efficient way to set up a joke. Granting that, perhaps Wheeler’s seen too many Venn Diagram cartoons that fail, a complaint I’m sympathetic to.

Bill Amend’s FoxTrot (February 11, rerun) was one of those strips trying to be taped to the math teacher’s door, with the pun-based programming for the Math Channel.

Reading the Comics, December 12, 2013


It’s a bit of a shame there weren’t quite enough comics to run my little roundup on the 11th of December, for that nice 11/12/13 sequence, but I’m not in charge of arranging these things. For this week’s gathering of mathematically themed comic strips there’s not any deeper theme than that they mention mathematical points, but at least the first couple of them have some real meat to the subject matter. (It feels to me like if one of the gathered comics inspires an essay, it’s usually one of the first couple in a collection. That might indicate that I get tired while writing these out, or it might reflect a biased recollection of when I do break out an essay.)

John Allen’s Nest Heads (December 5) is built around a kid not understanding a probability distribution: how many days in a row does it take to get the chance of snow to be 100 percent? The big flaw here is the supposition that the chance of snow is a (uhm) cumulative thing, so that if the snow didn’t happen yesterday or the day before it’s the more likely to happen today or tomorrow. As we actually use weather forecasts, though, they’re … well, I’m not sure I’d say they’re independent, that yesterday’s 30 percent chance of snow has nothing to do with today’s 25 percent chance, since it seems to me plausible that whether it snowed yesterday affects whether it snows today. But they don’t just add up until we get a 100 percent chance of snow when things start to drop.
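For the record, the way chances over several days do combine, if you’re willing to pretend the forecasts are independent: the chance of at least one snowy day is one minus the chance that every single day stays clear. A sketch, with made-up forecast numbers:

    # Chance of at least one snowy day, assuming independent forecasts.
    chances = [0.30, 0.25, 0.20, 0.30]  # hypothetical daily forecasts

    no_snow = 1.0
    for p in chances:
        no_snow *= 1 - p

    print(1 - no_snow)  # about 0.706: closer to certain, never past it

The chances creep toward 100 percent but never reach it, and they certainly don’t just add up (these four would “add” to 105 percent).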


Reading the Comics, November 13, 2013


For this week’s round of comic strips there’s almost a subtler theme than “they mention math in some way”: several have got links to statistical mechanics and the problem of recurrence. I’m not sure what’s gotten into Comic Strip Master Command that they sent out instructions to do comics that I can tie to the astounding interactions of infinities and improbable events, but it makes me wonder if I need to write a few essays about it.

Gene Weingarten, Dan Weingarten, and David Clark’s Barney and Clyde (October 30) summons the classic “infinite monkeys” problem of probability for its punch line: the concept that something producing strings of letters at random (taken to be monkeys because, I suppose, it’s assumed they would hit every key without knowing what sensibly comes next) would, given enough time, produce any given result. The idea goes back a long way, and it’s blessed with a compelling mental image even though typewriters are a touch old-fashioned these days.

It seems to have gotten its canonical formulation in Émile Borel’s 1913 article “Statistical Mechanics and Irreversibility”, as you might expect, since statistical mechanics brings up the curious problem of entropy. In short: every physical interaction, say, when two gases mix — let’s say clear air and the sort of pink smoke 1960s TV shows used to knock characters out with — is time-reversible. Look at the interaction of one clear-gas molecule and one pink-gas molecule and you can’t tell whether it’s playing forward or backward. But look at the entire room and it’s obvious whether they’re mixing or unmixing. How can something be time-reversible at every step of every interaction but not in whole?

The idea got a second compelling metaphor with Jorge Luis Borges’s Library of Babel, with a bit more literary class and in many printings fewer monkeys.


The Rare Days


The subject doesn’t quite feel right for my occasional roundups of mathematics-themed comic strips, but I noticed this month that the bit about “what is so rare as a day in June?” is coming up … well, twice, so it’s silly to call that “a lot” just yet, but it’s coming up at all. First was back on June 10th, with Jef Mallett’s Frazz (which actually enlightened me, as I didn’t know where the line came from, and yes, it’s the Lowell family that also produced Percival), and then John Rose’s Barney Google and Snuffy Smith repeated the question on the 13th.

The question made me immediately think of an installment of Walt Kelly’s Pogo, where Pogo (I believe) asked the question and Porky Pine immediately answered “any day in February”. But it got me wondering whether the question could be answered more subtly, that is, more counter-intuitively.


How Big Is This Number?


I mentioned in a throwaway bit in the article on Goldbach’s Odd Conjecture being (apparently) proven that the number 3^{3^{15}} had been a bound in the conjecture. That is, it was proven in 1939 that numbers larger than that had to obey the conjecture, but it remained unproven for numbers smaller than that. I described it as a number that takes something like seven million digits to write out in full, that is, in a decimal expansion rather than some powers-of-powers sort of thing.

So let me give it a little attention as a puzzle for people who want to pass a little time doing arithmetic. Am I right to say that 3^{3^{15}} would be a number with about seven million digits?

The obvious way to check is to see what Google comes up with if you put 3^(3^(15)), although that turns out to be Bible quotes. Its calculator gives back Infinity, which here just means “it’s a really, really big number”. My Mac’s calculator function and my copy of Octave agree on that. It’s possible to find a better calculator that gives a meaningful answer, but you can work out roughly how big the number is just by hand, and for that matter, without resorting to anything you have to look up. I promise.
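Here’s a spoiler for the method, though not for the by-hand working-out: you never need the number itself, just its logarithm. The count of decimal digits in a whole number N is the floor of log10(N) plus one, and log10 of 3^{3^{15}} is just 3^{15} times log10(3), which any computer handles happily:

    import math

    exponent = 3 ** 15  # 14,348,907
    digits = math.floor(exponent * math.log10(3)) + 1
    print(digits)       # 6,846,169: about seven million digits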

Reading the Comics, April 28, 2013


The flow of mathematics-themed comic strips almost dried up in April. I’m going to assume this reflects the kids of the cartoonists being on Spring Break, and teachers not placing exams immediately after the break, in early to mid-March, and that we were just seeing the lag from that. I’m joking a little bit, but surely there’s some explanation for the rash of did-you-get-your-taxes-done comics appearing two weeks after April 15, and I’m fairly sure it isn’t the high regard United States newspaper syndicates have for their Canadian readership.

Dave Whamond’s Reality Check (April 8) uses the “infinity” symbol and tossed pizza dough together. The ∞ symbol, I understand, is credited to the English mathematician John Wallis, who introduced it in the Treatise on the Conic Sections, a text that made clearer how conic sections could be described algebraically. Wikipedia claims that Wallis had the idea that negative numbers were, rather than less than zero, actually greater than infinity, which we’d regard as a quirky interpretation, but (if I can verify it) it makes for an interesting point in figuring out how long people took to understand negative numbers like we believe we do today.

Jonathan Lemon’s Rabbits Against Magic (April 9) does a playing-the-odds joke, in this case in the efficiency of alligator repellent. The joke in this sort of thing comes to the assumption of independence of events — whether the chance that a thing works this time is the same as the chance of it working last time — and a bit of the idea that you find the probability of something working by trying it many times and counting the successes. Trusting in the Law of Large Numbers (and the independence of the events), this empirically-generated probability can be expected to match the actual probability, once you spend enough time thinking about what you mean by a term like “the actual probability”.
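That empirical approach is easy to sketch, with a made-up “true” chance standing in for the repellent’s actual effectiveness:

    import random

    # Estimate a probability by trying the experiment many times and
    # counting successes.  The 0.25 is an invented true chance.
    true_chance = 0.25
    trials = 100_000
    successes = sum(random.random() < true_chance for _ in range(trials))
    print(successes / trials)  # hovers near 0.25, as the Law of Large
                               # Numbers promises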


Quick Little Calculus Puzzle


fluffy, one of my friends and regular readers, got to discussing a couple of limit problems with me, particularly ones that seemed to be solved through L’Hopital’s Rule, and then ran across some that don’t call for that tool of Freshman Calculus which you maybe remember. It’s the thing for limits of zero divided by zero, or infinity divided by infinity. (It can also be applied to a couple of other “indeterminate forms”; I remember when I took this level of calculus the teacher explaining there were seven such forms. I think they’re \frac00, \frac{\infty}{\infty}, 0^0, \infty^{0}, 0 \cdot \infty, 1^{\infty}, \mbox{ and } \infty - \infty but I would not recommend trusting my memory in favor of actually studying for your test.)

Anyway, fluffy put forth two cute little puzzles that I had immediate responses for, and then started getting plagued by doubts about, so I thought I’d put them out here for people who want the recreation. They’re both about taking the limit at zero of fractions, specifically:

\lim_{x \rightarrow 0} \frac{e^x}{x^e}

\lim_{x \rightarrow 0} \frac{x^e}{e^x}

where e here is the base of the natural logarithm, that is, that number just a little above 2.71828 that mathematicians find so interesting even though it isn’t pi.

The limit is, if you want to be exact, a subtly and carefully defined idea that took centuries of really bright work to explain. But the first really good feeling I got for it is to imagine a function evaluated at the points near but not exactly at the target point — in the limits here, where x equals zero — and to see: if you keep evaluating the expression at x very near zero, are its values very near something? If they are, that thing the values get near is probably the limit at that point.

So, yes, you can plug in values of x like 0.1 and 0.01 and 0.0001 and so on into \frac{e^x}{x^e} and \frac{x^e}{e^x} and get a feeling for what the limit probably is. Saying what it definitely is takes a little more work.
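If you’d like the computer to do the plugging-in (this only approaches zero from the right, since x^e isn’t real-valued for negative x):

    import math

    # Probe both ratios at points creeping toward zero.
    for x in (0.1, 0.01, 0.001, 0.0001):
        print(x, math.exp(x) / x ** math.e, x ** math.e / math.exp(x))

Watch which column of numbers grows without bound and which shrinks toward zero, and you have your candidate answers.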

Dilbert, Infinity, and 17


I dreamed recently that I opened the Sunday comics to find Scott Adams’s Dilbert strip turned into a somewhat lengthy, weird illustrated diatribe about how all numbers smaller than infinity were essentially the same, with the exception of the privileged number 17, which was the number of kinds of finite groups sharing some interesting property. Before I carry on I should point out that I have no reason to think that Scott Adams has any particularly crankish mathematical views, and no reason to think that he thinks much about infinity, finite groups, or the number 17. Imagining he has some fixation on them is wholly the creation of my unconscious or semiconscious mind, whatever parts of mind and body create dreams. But there are some points I can talk about from that start.


Reblog: Infinity Day


I hadn’t thought of this as “infinity day” coming up, but, why not? The Sciencelens blog here offers some comfortable familiar comments introducing the modern mathematical construction of infinitely large sets and how to compare sizes of infinitely large sets.

Sciencelens

Today is 8 August, the eighth of the eighth, 8-8.  Or, if you turn it on its side, a couple of infinity signs stacked on top of each other… Yep, it’s Infinity Day!

The concept of infinity refers to something that is without limits. It has application in various fields such as mathematics, physics, logic and computing. Infinite sets can be either countably infinite (for example the set of integers – you can count the individual numbers, even though they go on forever) or uncountably infinite (e.g. real numbers – there are also infinitely many of them, but you cannot count the individual numbers because they are not discrete entities).

Since infinity is really, really big – incomprehensibly so – it can lead to some amusing paradoxical scenarios; things that don’t make sense, by making complete sense.

An example of this is the Galileo Paradox, which states that

View original post 369 more words

Searching For Infinity On The New York Thruway


So with several examples I’ve managed to prove what nobody really questioned, that it’s possible to imagine a complicated curve like the route of the New York Thruway and assign to all the points on it, or at least to the center line of the road, a unique number that no other point on the road has. And, more, it’s possible to assign these unique numbers in many different ways, from any lower bound we like to any upper bound we like. It’s a nice system, particularly if we’re short on numbers to tell us when we approach Loudonville.

But I’m feeling ambitious right now and want to see how ridiculously huge, positive or negative, a number I can assign to some point on the road. Since we’d measured distances from a reference point by miles before and got a range of about 500, or by millimeters and got a range of about 800,000,000, obviously we could get to any number, however big or small, just by measuring distance using the appropriate unit: lay megaparsecs or angstroms down on the Thruway, or even use some awkward or contrived units. I want to shoot for infinitely big numbers. I’ll start by dividing the road in two.

After all, there are two halves to the Thruway, a northern half and a southern half, both arranged like upside-down u’s across the state. Instead of thinking of the center line of the whole Thruway, then, think of the center lines of the northern road and of the southern. They’re both about the same 496-mile length, but it’d be remarkable if they were exactly the same length. Let’s suppose the northern belt is 497 miles, and the southern 495. Pretty naturally we can give the northern belt numbers from 0 to 497, based on how far points are from the south-eastern end of the road; similarly, the southern belt gets numbers from 0 to 495, from the same reference point.
