Reading the Comics, December 5, 2014: Good Questions Edition


This week’s bundle of mathematics-themed comic strips has a pretty nice blend, to my tastes: about half of them get at good and juicy topics, and about half are pretty quick and easy things to describe. So, my thanks to Comic Strip Master Command for the distribution.

Bill Watterson’s Calvin and Hobbes (December 1, rerun) slips in a pretty good probability question, although the good part is figuring out how to word it: what are the chances Calvin’s Dad was thinking of 92,376,051, out of all the possible numbers out there? Given that there are infinitely many possible choices, if every one of them is equally likely to be drawn, then the chance he was thinking of that particular number is zero. But Calvin’s Dad couldn’t be picking from every possible number; all humanity, working for its entire existence, will only ever think of finitely many numbers, which is the kind of fact that humbles me when I stare too hard at it. And people, asked to pick a number, have ones they prefer: 37, for example, or 17. Christopher Miller’s American Cornball: A Laffopedic Guide To The Formerly Funny (a fine guide to jokes that you see lingering around long after they were actually funny) notes that what number people tend to pick seems to vary in time, and in the early 20th century 23 was just about as funny a number as you could think of on a moment’s notice.

Zach Weinersmith’s Saturday Morning Breakfast Cereal (December 1) is entitled “how introductory physics problems are written”, and yeah, that’s about the way that a fair enough problem gets rewritten so as to technically qualify as a word problem. I think I’ve mentioned my favorite example of quietly out-of-touch word problems, a 1970s-era probability book which asked the probability of two out of five transistors in a radio failing. That was blatantly a rewrite of a problem about a five-vacuum-tube radio (my understanding is that many radios of that era used five tubes), each of which would have a non-negligible chance of failing on any given day. But that makes it a slightly different problem: the original question would have made good sense when it was composed, and only in the updating did it become ridiculous.

Julie Larson’s The Dinette Set (December 2) illustrates one of the classic sampling difficulties: how can something be generally true if, in your experience, it isn’t? If you make the reasonable assumption that there’s nothing exceptional about you, then shouldn’t your experience of, say, the fraction of people who exercise, or the average length of commute, or the neighborhood crime rate be tolerably close to what’s really going on? You could probably build an entire philosophy-of-mathematics course around this panel before even reaching the question of how to get a fair survey of a population.

Scott Hilburn’s The Argyle Sweater (December 3) tells a Roman numeral joke that actually I don’t remember encountering before. Huh.

Samson’s Dark Side Of The Horse (December 3) does some play with mathematical symbols and of course I got distracted by thinking what kind of problem Horace was working on in the first panel; it looks obvious to me that it’s something about the orbit of one body around another. In principle, it might be anything, since the great discovery of algebra is that you can replace numbers with symbols like “a” and work out relations without having to know anything about them. “G”, for example, tends to mean the gravitational constant of the universe, and “GM” makes this identification almost certain: gravitation problems need the masses of a main body, like a planet, and a smaller body, like a satellite, and that’s usually represented as either m1 and m2 or as M and m.

In orbital mechanics problems, “a” often refers to the semimajor axis — the long diameter of the ellipse the orbiting body makes — and “e” the eccentricity — a measure of how close to a circle the ellipse is (an eccentricity of zero means it’s a circle). But the fact that there are subscripts of k makes that identification suspect: subscripts are often used to distinguish which of multiple similar things you mean to talk about, and if it’s just one body orbiting the other there’s no need for that. So what is Horace working on?

The answer is: Horace is working on an orbital perturbation problem, describing how far from the true circular orbit a satellite will drift when you consider things like atmospheric drag and the slightly non-spherical shape of the Earth. ak is still a semi-major axis and ek an eccentricity, but of the little changes from the original orbit, rather than of the original orbit itself. And now I wonder if Samson plucked the original symbol just because it looked so graphically pleasant, or if Samson was slipping in a further joke about the way an attractive body will alter another body’s course.

Jenny Campbell’s Flo and Friends (December 4) offers a less exciting joke: it’s a simple word problem joke, playing on the ambiguity of “calculate how many seconds there are in the year”. Now, the dull way to work this out is to multiply 60 seconds per minute times 60 minutes per hour times 24 hours per day times 365 (or 365.25, or 365.2422 if you want to start that argument) days per year. But we can do almost as well and purely in our heads, if we remember that a million seconds is almost twelve days long. How many twelve-day stretches are there in a year? Well, right about 31 — after all, the year is (nearly) 12 groups of 31 days, and therefore it’s also 31 groups of 12 days. Therefore the year is about 31 million seconds long. If we pull out the calculator we find that a 365-day year is 31,536,000 seconds, but isn’t it more satisfying to say “about 31 million seconds” like we just did?
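If you'd like a machine to check both versions, the arithmetic fits in a few lines of Python:

```python
# The dull way: seconds per minute, minutes per hour, hours per day, days per year.
exact = 60 * 60 * 24 * 365
print(exact)  # 31536000

# The in-your-head way: a million seconds is a bit over 11.5 days,
# and a 365-day year is about 31 stretches of 12 days each.
estimate = 31 * 1_000_000
print(abs(exact - estimate) / exact)  # under two percent off
```

The estimate lands within about 1.7 percent of the exact count, which is more precision than most conversations need.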

John Deering’s Strange Brew (December 4) took me the longest time to work out what the joke was supposed to be. I’m still not positive but I think it’s just one colleague sneering at the higher mathematics of another.

Todd the Dinosaur's abacus only goes up to 2.
Patrick Roberts’s Todd the Dinosaur (December 5) discovers there are numbers bigger than 2.

Patrick Roberts’s Todd the Dinosaur (December 5) discovers that some numbers are quite big ones, actually. And there is a real challenge in working with really big numbers, even ones much bigger than 2. Usually we’re not interested in a number by itself; we’d rather do some kind of calculation with it, and that’s boring to do too much of, but a computer can only work with so many digits at once. The average computer uses floating point arithmetic schemes which track, at most, about 16 to 19 significant decimal digits, on the reasonable guess that the twentieth decimal digit is the difference between 3.1415926535897932384 and 3.1415926535897932385, and how often is that difference — a millionth of a millionth of a millionth of a percent — going to matter? If it does, then you do the kind of work that gets numerical mathematicians their big paydays: using schemes that work with more digits, or chopping up a problem so that you never have to use all 86 relevant digits at once, or rewriting your calculation so that you don’t need so many digits of accuracy all at once.
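A concrete illustration, sketched in Python, whose ordinary floats are 64-bit doubles carrying roughly 16 significant decimal digits:

```python
from decimal import Decimal, getcontext

# As ordinary floats, these two twenty-digit approximations of pi
# round to exactly the same stored value:
a = 3.1415926535897932384
b = 3.1415926535897932385
print(a == b)  # True

# A software scheme such as the decimal module can track as many
# digits as you're willing to pay for, and then the two differ:
getcontext().prec = 25
print(Decimal("3.1415926535897932384") == Decimal("3.1415926535897932385"))  # False
```

That second kind of scheme is the "more digits" approach mentioned above; it costs speed, which is why it's reserved for when the twentieth digit actually matters.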

David Bloomier, former math teacher, has the number 4 car, identified by addition, division, square root, and finger count.
Daniel Beyer’s Offbeat Comics (December 5) gives four ways to represent the number 4. Five, if you count the caption.

Daniel Beyer’s Offbeat Comics (December 5) gives a couple of ways to express the number 4 — including, look closely, holding up fingers — as part of a joke about the driver being a former mathematics teacher.

Greg Cravens’s The Buckets (December 5) is the old, old, old joke about never using algebra in real life. Do English teachers get this same gag about never using the knowledge of how to diagram sentences? In any case, I did use my knowledge of sentence-diagramming, and for the best possible application: I made fun of a guy on the Internet with it.

I advise against reading the comments — I mean, that’s normally good advice, but comic strips attract readers who want to complain about how stupid kids are anymore, and strips that mention education give plenty of grounds for it — but I noticed one of the early comments said “try to do any repair at home without some understanding of it”. I like the claim, but I can’t think of any home repair I’ve done that’s needed algebra. The most I’ve needed has been working out the area of a piece of plywood, but if multiplying length by width is algebra then we’ve badly debased the term. Even my really ambitious project, building a PVC-frame pond cover, isn’t going to be one that uses algebra unless we take an extremely generous view of the subject.


Reading the Comics, September 28, 2014: Punning On A Sunday Edition


I honestly don’t intend this blog to become nothing but talk about the comic strips, but then something like this Sunday happens where Comic Strip Master Command decided to send out math joke priority orders and what am I to do? And here I had a wonderful bit about the natural logarithm of 2 that I meant to start writing sometime soon. Anyway, for whatever reason, there’s a lot of punning going on this time around; I don’t pretend to explain that.

Jason Poland’s Robbie and Bobby (September 25) puns off of a “meth lab explosion” in a joke that I’ve seen passed around Twitter and the like but not in a comic strip, possibly because I don’t tend to read web comics until they get absorbed into the Gocomics.com collective.

Brian Boychuk and Ron Boychuk’s The Chuckle Brothers (September 26) shows how an infinity pool offers the chance to finally, finally, do a one-point perspective drawing just like the art instruction book says.

Bill Watterson’s Calvin and Hobbes (September 27, rerun) wrapped up the latest round of Calvin not learning arithmetic with a gag about needing to know the difference between the numbers of things and the values of things. It also surely helps the confusion that the (United States) dime is a tiny coin, much smaller in size than the penny or nickel that it far out-values. I’m glad I don’t have to teach coin values to kids.

Zach Weinersmith’s Saturday Morning Breakfast Cereal (September 27) mentions Lagrange points. These are mathematically (and physically) very interesting because they come about from what might be the first interesting physics problem. If you have two objects in the universe, attracting one another gravitationally, then you can describe their behavior perfectly, using just freshman or even high school calculus. For that matter, describing their behavior is practically what Isaac Newton invented his calculus to do.

Add in a third body, though, and you’ve suddenly created a problem that just can’t be done by freshman calculus, or really, done perfectly by anything but really exotic methods. You’re left with approximations, analytic or numerical. (Karl Fritiof Sundman proved in 1912 that one could create an infinite series solution, but it’s not a usable solution. To get a desired accuracy requires so many terms and so much calculation that you’re better off not using it. This almost sounds like the classical joke about mathematicians, coming up with solutions that are perfect but unusable. It is the most extreme case of a possible-but-not-practical solution I’m aware of, if stories I’ve heard about its convergence rate are accurate. I haven’t tried to follow the technique myself.)

But just because you can’t solve every problem of a type doesn’t mean you can’t solve some of them, and the ones you do solve might be useful anyway. Joseph-Louis Lagrange did that, studying the problem of one large body — like a sun, or a planet — and one middle-sized body — a planet, or a moon — and one tiny body — like an asteroid, or a satellite. If the middle-sized body is orbiting the large body in a nice circular orbit, then, there are five special points, dubbed the Lagrange points. A satellite that’s at one of those points (with the right speed) will keep on orbiting at the same rotational speed that the middle body takes around the large body; that is, the system will turn as if the large, middle, and tiny bodies were fixed in place, relative to each other.

Two of these spots, dubbed numbers 4 and 5, are stable: if your tiny body is not quite in the right location that’s all right, because it’ll stay nearby, much in the same way that if you roll a ball into a pit it’ll stay in the pit. But three of these spots, numbers 1, 2, and 3, are unstable: if your tiny body is not quite on those spots, it’ll fall away, in much the same way if you set a ball on the peak of the roof it’ll roll off one way or another.

When Lagrange noticed these points there wasn’t any particular reason to think of them as anything but a neat mathematical construct. But the points do exist, and they can be stable even if the medium body doesn’t have a perfectly circular orbit, or even if there are other planets in the universe, which would throw off the nice simple calculations. Something like 1700 asteroids are known to exist in the number 4 and 5 Lagrange points for the Sun and Jupiter, and there are a handful known for Saturn and Neptune, and apparently at least five known for Mars. For Earth apparently there’s just the one known to exist, catchily named 2010 TK7, discovered in October 2010, although I’d be surprised if that were the only one. They’re just small.

Professor Peter Peddle has the crazy idea of studying boxing scientifically and preparing strategy accordingly.
Elliot Caplin and John Cullen Murphy’s Big Ben Bolt, from the 23rd of August, 1953 (rerun the 28th of September, 2014).

Elliot Caplin and John Cullen Murphy’s Big Ben Bolt (September 28, originally run August 23, 1953) has been on the Sunday strips now running a tale about a mathematics professor, Peter Peddle, who’s threatening to revolutionize Big Ben Bolt’s boxing world by reducing it to mathematical abstraction; past Sunday strips have even shown the rather stereotypically meek-looking professor overwhelming much larger boxers. The mathematics described here is nonsense, of course, but it’d be asking a bit of the comic strip writers to have a plausible mathematical description of the perfect boxer, after all.

But it’s hard for me anyway to not notice that the professor’s approach is really hard to gainsay. The past generation of baseball, particularly, has been revolutionized by a very mathematical, very rigorous bit of study, looking at questions like how many pitches can a pitcher actually throw before he loses control, and where a batter is likely to hit based on past performance (of this batter and of batters in general), and how likely is this player to have a better or a worse season if he’s signed on for another year, and how likely is it he’ll have a better enough season than some cheaper or more promising player? Baseball is extremely well structured to ask these kinds of questions, with football almost as good for it — else there wouldn’t be fantasy football leagues — and while I am ignorant of modern boxing, I would be surprised if a lot of modern boxing strategy weren’t being studied in Professor Peddle’s spirit.

Eric the Circle (September 28), this one by Griffinetsabine, goes to the Shapes Singles Bar for a geometry pun.

Bill Amend’s FoxTrot (September 28) (and not a rerun; the strip runs new material on Sundays) jumps on the Internet Instructional Video bandwagon that I’m sure exists somewhere, with child prodigy Jason Fox having the idea that he could make mathematics instruction popular enough to earn millions of dollars. His instincts are probably right, too: instructional videos that feature someone who looks cheerful and to be having fun and maybe a little crazy — well, let’s say eccentric — are probably the ones that will be most watched, at least. It’s fun to see people who are enjoying themselves, and the odder they act the better up to a point. I kind of hate to point out, though, that Jason Fox in the comic strip is supposed to be ten years old, implying that (this year, anyway) he was born nine years after Bob Ross died. I know that nothing ever really goes away anymore, but, would this be a pop culture reference that makes sense to Jason?

Tom Thaves’s Frank and Ernest (September 28) sets up the idea of Euclid as a playwright, offering a string of geometry puns.

Jef Mallet’s Frazz (September 28) wonders about why trains show up so often in story problems. I’m not sure that they do, actually — haven’t planes and cars taken their place here, too? — although the reasons aren’t that obscure. Questions about the distance between things changing over time let you test a good bit of arithmetic and algebra while being naturally about stuff it’s reasonable to imagine wanting to know. What more does the homework-assigner want?

Zach Weinersmith’s Saturday Morning Breakfast Cereal (September 28) pops back up again with the prospect of blowing one’s mind, and it is legitimately one of those amazing things, that e^{i \pi} = -1 . It is a remarkable relationship between a string of numbers each of which are mind-blowing in their ways — negative 1, and pi, and the base of the natural logarithms e, and dear old i (which, multiplied by itself, is equal to negative 1) — and here they are all bundled together in one, quite true, relationship. I do have to wonder, though, whether anyone who would in a social situation like this understand being told “e raised to the i times pi power equals negative one”, without the framing of “we’re talking now about exponentials raised to imaginary powers”, wouldn’t have already encountered this and had some of the mind-blowing potential worn off.
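Python's cmath module will confirm the relationship numerically, up to the rounding that comes of storing pi in finitely many digits:

```python
import cmath

z = cmath.exp(1j * cmath.pi)  # e raised to the (i times pi) power
print(z)                      # -1 plus a vanishingly small imaginary part
print(abs(z - (-1)) < 1e-15)  # True: equal to -1 up to rounding
```

The tiny leftover imaginary part isn't the relationship failing; it's the computer's finite-digit pi showing through.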

What’s Going On In The Old Universe


Last time in this infinitely-old universe puzzle, we found that by making a universe of only three kinds of atoms (hydrogen, iron, and uranium) which shifted to one another with fixed chances over the course of time, we’d end up with the same distribution of atoms regardless of what the distribution of hydrogen, iron, and uranium was to start with. That seems like it might require explanation.

(For people who want to join us late without re-reading: I got to wondering what the universe might look like if it just ran on forever, stars fusing lighter elements into heavier ones, heavier elements fissioning into lighter ones. So I looked at a toy model where there were three kinds of atoms, dubbed hydrogen for the lighter elements, iron for the middle, and uranium for the heaviest, and made up some numbers saying how likely hydrogen was to be turned into heavier atoms over the course of a billion years, how likely iron was to be turned into something heavier or lighter, and how likely uranium was to be turned into lighter atoms. And sure enough, if the rates of change stay constant, then the universe goes from any initial distribution of atoms to a single, unchanging-ever-after mix in surprisingly little time, considering it’s got a literal eternity to putter around.)

The first question, it seems, is whether I happened to pick a freak set of numbers for the change of one kind of atom to another. It’d be a stroke of luck, but, these things happen. In my first model, I gave hydrogen a 25 percent chance of turning to iron, and no chance of turning to uranium, in a billion years. Let’s change that so any given atom has a 20 percent chance of turning to iron and a 20 percent chance of turning to uranium. Similarly, instead of iron having no chance of turning to hydrogen and a 40 percent chance of turning to uranium, let’s try giving each iron atom a 25 percent chance of becoming hydrogen and a 25 percent chance of becoming uranium. Uranium, first time around, had a 40 percent chance of becoming hydrogen and a 40 percent chance of becoming iron. Let me change that to a 60 percent chance of becoming hydrogen and a 20 percent chance of becoming iron.

With these chances of changing, a universe that starts out purely hydrogen settles on being about 50 percent hydrogen, a little over 28 percent iron, and a little over 21 percent uranium in about ten billion years. If the universe starts out with equal amounts of hydrogen, iron, and uranium, however, it settles over the course of eight billion years to … 50 percent hydrogen, a little over 28 percent iron, and a little over 21 percent uranium. If it starts out with no hydrogen and the rest of matter evenly split between iron and uranium, then over the course of twelve billion years it gets to … 50 percent hydrogen, a little over 28 percent iron, and a little over 21 percent uranium.
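Here's a quick Python sketch of that run, just automating the arithmetic; the limiting mix works out to exactly 1/2, 2/7, and 3/14, which matches the percentages above:

```python
def step(h, f, u):
    # Revised rates: hydrogen keeps 60%, gives 20% each to iron and uranium;
    # iron keeps 50%, gives 25% each to hydrogen and uranium;
    # uranium keeps 20%, gives 60% to hydrogen and 20% to iron.
    return (0.60 * h + 0.25 * f + 0.60 * u,
            0.20 * h + 0.50 * f + 0.20 * u,
            0.20 * h + 0.25 * f + 0.20 * u)

state = (1.0, 0.0, 0.0)  # a universe of pure hydrogen
for _ in range(50):      # fifty billion years, far longer than needed
    state = step(*state)
print(state)  # roughly (0.5, 0.2857, 0.2143)
```

Swapping in any other starting mix for `state` gives the same final answer, which is the point of the experiment.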

Perhaps the problem is that I’m picking these numbers, and I’m biased towards things that are pretty nice ones — halves and thirds and two-fifths and the like — and maybe that’s causing this state where the universe settles down very quickly and stops changing any. We should at least try that before supposing there’s necessarily something more than coincidence going on here.

So I set the random number generator to produce some element changes which can’t be susceptible to my bias for simple numbers. Give hydrogen a 44.5385 percent chance of staying hydrogen, a 10.4071 percent chance of becoming iron, and a 45.0544 percent chance of becoming uranium. Give iron a 25.2174 percent chance of becoming hydrogen, a 32.0355 percent chance of staying iron, and a 42.7471 percent chance of becoming uranium. Give uranium a 2.9792 percent chance of becoming hydrogen, a 48.9201 percent chance of becoming iron, and a 48.1007 percent chance of staying uranium. (Clearly, by the way, I’ve given up on picking numbers that might reflect some actual if simple version of nucleosynthesis and I’m just picking numbers for numbers’ sake. That’s all right; the question this essay asks is whether we’re stuck getting an unchanging yet infinitely old universe.)

And the same thing happens again: after nine billion years a universe starting from pure hydrogen will be about 18.7 percent hydrogen, about 35.7 percent iron, and about 45.6 percent uranium. Starting from no hydrogen, 50 percent iron, and 50 percent uranium, we get to the same distribution in again about nine billion years. A universe beginning with equal amounts hydrogen, iron, and uranium under these rules gets to the same distribution after only seven billion years.
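Sketched the same way in Python, three quite different starting universes do land on one mix under these randomly drawn rates:

```python
def step(h, f, u):
    # The randomly generated billion-year rates from above.
    return (0.445385 * h + 0.252174 * f + 0.029792 * u,
            0.104071 * h + 0.320355 * f + 0.489201 * u,
            0.450544 * h + 0.427471 * f + 0.481007 * u)

finals = []
for state in [(1.0, 0.0, 0.0), (0.0, 0.5, 0.5), (1/3, 1/3, 1/3)]:
    for _ in range(60):  # sixty billion years, to be quite sure
        state = step(*state)
    finals.append(state)

for h, f, u in finals:
    # Every start ends near 18.7 / 35.7 / 45.6 percent.
    print(round(100 * h, 1), round(100 * f, 1), round(100 * u, 1))
```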

The conclusion is that this settling down doesn’t seem to be caused by picking numbers that are too particularly nice-looking or obviously attractive; and the distributions don’t seem to have an obvious link to the probabilities of changing. There seems to be something happening here, though admittedly we haven’t proven that rigorously. To spoil a successor article in this thread: there is something here, and it’s a big thing.

(Also, no, we’re not stuck with an unchanging universe, and good on you if you can see ways to keep the universe changing without, like, having the probability of one atom changing to another itself vary in time.)

To Build A Universe


So I kept thinking about what the distribution of elements might be in an infinitely old universe. It’s a tough problem to consider, if you want to do it exactly right, since you have to consider how stars turn lighter atoms into heavier ones through a blistering array of possibilities. Besides the nearly hundred different elements — the element being determined by how many protons are in the nucleus — each element has multiple isotopes — determined by how many neutrons are in the nucleus — and I don’t know how many there are to consider, but it’s certainly at least several hundred to deal with. There’s probably a major work in the astrophysics literature describing all the ways atoms and their isotopes can get changed over the course of a star’s lifetime, either actually existing or waiting for an indefatigable writer to make it her life’s work.

But I can make a toy model, because I want to do mathematics, and I can see what I might learn from that. This is basically a test vehicle: I want to see whether building a more accurate model is likely to be worthwhile.

For my toy model of the universe I will pretend there are only three kinds of atoms in the universe: hydrogen, iron, and uranium. These represent the lighter elements — which can fuse together to release energy — and Iron-56 — which can’t release energy by fusing into heavier or by fissioning into lighter elements — and the heavier elements — which can fission apart to release energy and lighter elements. I can describe the entire state of the universe with three numbers, saying what fraction of the universe is hydrogen, what fraction is iron, and what fraction is uranium. So these are pretty powerful toys.

Over time the stars in this universe will change some of their hydrogen into iron, and some of their iron into uranium. The uranium will change some of itself into hydrogen and into iron. How much? I’m going to make up some nice simple numbers and say that over the course of a billion years, one-quarter of all the hydrogen in the universe will be changed into iron; three-quarters of the hydrogen will remain hydrogen. Over that same time, let’s say two-fifths of all the iron in the universe will be changed to uranium, while the remaining three-fifths will remain iron. And the uranium? Well, that decays; let’s say that two-fifths of the uranium will become hydrogen, two-fifths will become iron, and the remaining one-fifth will stay uranium. If I had more elements in the universe I could make a more detailed, subtle model, and if I didn’t feel quite so lazy I might look up more researched figures for this, but, again: toy model.

I’m by the way assuming this change of elements is constant for all time and that it doesn’t depend on the current state of the universe. There are sound logical reasons behind this: to have the rate of nucleosynthesis vary in time would require me to do more work. As above: toy model.

So what happens? This depends on what we start with, sure. Let’s imagine the universe starts out made of nothing but hydrogen, so that the composition of the universe is 100% hydrogen, 0% iron, 0% uranium. After the first billion years, some of the hydrogen will be changed to iron, but since there wasn’t any iron there’s no uranium yet. The universe’s composition would be 75% hydrogen, 25% iron, 0% uranium. After the next billion years a quarter of the hydrogen becomes iron and two-fifths of the iron becomes uranium, so we’ll be at 56.25% hydrogen, 33.75% iron, 10% uranium. Another billion years passes, and once again a quarter of the hydrogen becomes iron, two-fifths of the iron becomes uranium, and two-fifths of the uranium becomes hydrogen and another two-fifths becomes iron. This is a lot of arithmetic but the results are easy enough to find: 46.188% hydrogen, 38.313% iron, 15.5% uranium. After some more time we have 40.841% hydrogen, 40.734% iron, 18.425% uranium. It’s maybe a fair question whether the universe is going to run itself all the way down to have nothing but iron, but, the next couple billions of years show things settling down. Let me put all this in a neat little table.

Composition of the Toy Universe
Age (billion years)   Hydrogen   Iron   Uranium
0 100% 0% 0%
1 75% 25% 0%
2 56.25% 33.75% 10%
3 46.188% 38.313% 15.5%
4 40.841% 40.734% 18.425%
5 38% 42.021% 19.979%
6 36.492% 42.704% 20.804%
7 35.691% 43.067% 21.242%
8 35.265% 43.260% 21.475%
9 35.039% 43.362% 21.599%
10 34.919% 43.417% 21.665%
11 34.855% 43.446% 21.700%
12 34.821% 43.461% 21.718%
13 34.803% 43.469% 21.728%
14 34.793% 43.473% 21.733%
15 34.788% 43.476% 21.736%
16 34.786% 43.477% 21.737%
17 34.784% 43.478% 21.738%
18 34.783% 43.478% 21.739%
19 34.783% 43.478% 21.739%
20 34.783% 43.478% 21.739%

We could carry on but there’s really no point: the numbers aren’t going to change again. Well, probably they’re changing a little bit, four or more places past the decimal point, but this universe has settled down to a point where just as much hydrogen is being lost to fusion as is being created by fission, and just as much uranium is created by fusion as is lost by fission, and just as much iron is being made as is being turned into uranium. There’s a balance in the universe.
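If you'd like to regenerate the table, the toy rules fit in a short Python sketch, just the arithmetic above automated:

```python
def step(h, f, u):
    # The toy rules: 1/4 of the hydrogen fuses to iron; 2/5 of the iron
    # fuses to uranium; uranium splits 2/5 to hydrogen and 2/5 to iron.
    return (0.75 * h + 0.40 * u,
            0.25 * h + 0.60 * f + 0.40 * u,
            0.40 * f + 0.20 * u)

state = (1.0, 0.0, 0.0)  # the pure-hydrogen starting universe
for age in range(21):
    print(age, [round(100 * x, 3) for x in state])
    state = step(*state)
# By age 20 the mix has frozen at about 34.783% / 43.478% / 21.739%.
```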

At least, that’s the balance if we start out with a universe made of nothing but hydrogen. What if it started out with a different breakdown, for example, a universe that started as one-third hydrogen, one-third iron, and one-third uranium? In that case, as the universe ages, the distribution of elements goes like this:

Composition of the Toy Universe
Age (billion years)   Hydrogen   Iron   Uranium
0 33.333% 33.333% 33.333%
1 38.333% 41.667% 20%
2 36.75% 42.583% 20.667%
3 35.829% 43.004% 21.167%
4 35.339% 43.226% 21.435%
5 35.078% 43.345% 21.578%
10 34.795% 43.473% 21.732%
15 34.783% 43.478% 21.739%

We’ve gotten to the same distribution, only a tiny bit faster. (It doesn’t quite get there after fourteen billion years.) I hope it won’t shock you if I say that we’d see the same thing if we started with a universe made of nothing but iron, or of nothing but uranium, or of other distributions. Some take longer to settle down than others, but, they all seem to converge on the same long-term fate for the universe.

Obviously there’s something special about this toy universe, with three kinds of atoms changing into one another at these rates, which causes it to end up at the same distribution of atoms.
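As a hint of what's special: the settled-on mix can be computed directly, without any iterating, from the balance conditions described above. A sketch in exact fractions:

```python
from fractions import Fraction as F

# In balance, each element is created exactly as fast as it's destroyed:
#   h = (3/4) h + (2/5) u   which gives   h = (8/5) u
#   u = (2/5) f + (1/5) u   which gives   f = 2 u
#   h + f + u = 1           which gives   (8/5 + 2 + 1) u = 1
u = F(5, 23)
h = F(8, 5) * u
f = 2 * u
print(h, f, u)  # 8/23 10/23 5/23
print([float(100 * x) for x in (h, f, u)])  # about 34.783, 43.478, 21.739
```

Those fractions, 8/23 and 10/23 and 5/23, are exactly the percentages the tables settle on.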

In A Really Old Universe


So, my thinking about an “Olbers Universe” infinitely old and infinitely large in extent brought me to a depressing conclusion that such a universe has to be empty, or at least just about perfectly empty. But we can still ponder variations on the idea and see if that turns up anything. For example, what if we have a universe that’s infinitely old, but not infinite in extent, either because space is limited or because all the matter in the universe is in one neighborhood?

Suppose we have stars. Stars do many things. One of them is they turn elements into other elements, mostly heavier ones. For the lightest of atoms — hydrogen and helium, for example — stars create heavier elements by fusing them together. Making the heavier atoms from these lighter ones, in the net, releases energy, which is why fusion is constantly thought of as a potential power source. And that’s good for making elements up to as heavy as iron. After that point, fusion becomes a net energy drain. But heavier elements still get made as the star dies: when it can’t produce energy by fusion anymore the great mass of the star collapses on itself and that shoves together atom nucleuses regardless of the fact this soaks up more energy. (Warning: the previous description of nucleosynthesis, as it’s called, was very simplified and a little anthropomorphized, and wasn’t seriously cross-checked against developments in the field since I was a physics-and-astronomy undergraduate. Do not use it to attempt to pass your astrophysics qualifier. It’s good enough for everyday use, what with how little responsibility most of us have for stars.)

The important thing to me is that a star begins as a ball of dust, produced by earlier stars (and, in our finite universe, from the Big Bang, which produced a lot of hydrogen and helium and some of the other lightest elements), that condenses into a star, turns many of the elements in it into other elements, and then returns to a cloud of dust that mixes with other dust clouds and forms new stars.

Now. Over time, over the generations of stars, we tend to get heavier elements out of the mix. That’s pretty near straightforward mathematics: if you have nothing but hydrogen and helium — atoms that have one or two protons in the nucleus — it’s quite a trick to fuse them together into something with more than two, three, or four protons in the nucleus. If you have hydrogen, helium, lithium, and beryllium to work with — one, two, three, and four protons in the nucleus — it’s easier to get products with anywhere from two up to eight protons in the nucleus. And so on. The tendency is for each generation of stars to have relatively less hydrogen and helium and relatively more of the heavier atoms in its makeup.

So what happens if you have infinitely many generations? The first guess would be, well, stars will keep gathering together and fusing together as long as there are any elements lighter than iron, so that eventually there’d be a time when there were no (or at least no significant) amounts of elements lighter than iron, at which point the stars cease to shine. There’s nothing more to fuse together to release energy and we have a universe of iron- and heavier-metal ex-stars. I’m not sure if this is an even more depressing infinite universe than the infinitely large, infinitely old one which couldn’t have anything at all in it.

Except that this isn’t the whole story. Heavier elements than iron can release energy by fission, splitting into two or more lighter elements. Uranium and radium and a couple other elements are famous for it, but I believe every element has at least some radioactive isotopes. Popular forms of fission will produce alpha particles, which is what they named this particular type of radioactive product before they realized it was just the nucleus of a helium atom. Other types of radioactive decay will produce neutrons, which, if they’re not in the nucleus of an atom, will last an average of about fifteen minutes before decaying into a proton — a hydrogen nucleus — and some other stuff. Some more exotic forms of radioactive decay can produce protons by themselves, too. I haven’t checked the entire set of possible fission byproducts but I wouldn’t be surprised if most of the lighter elements can be formed by something’s breaking down.

In short, even if we fused the entire contents of the universe into atoms heavier than iron, we would still get out a certain amount of hydrogen and of helium, and also of other lighter elements. That is, stars turn hydrogen and helium, eventually, into very heavy elements; but the very heavy elements turn at least part of themselves back into hydrogen and helium.

So, it seems plausible, at least qualitatively, that given enough time to work there’d be a stable condition: hydrogen and helium being turned into heavier atoms at the same rate that heavier atoms are producing hydrogen and helium in their radioactive decay. And an infinitely old universe has enough time for anything.

And that’s, to me, anyway, an interesting question: what would the distribution of elements look like in an infinitely old universe?

(I should point out here that I don’t know. I would be surprised if nobody in the astrophysics community has worked it out, at least in a rough form, for an as-authentic-as-possible set of assumptions about how nucleosynthesis works. But I am so ignorant of the literature that I’m not sure how even to find the answer if it’s been posed. I can think of it as a mathematical puzzle at least, though.)