Reading the Comics, May 28, 2016: Visual Interest Will Never Reappear Edition


OK, that’s three weeks in a row in which all my mathematically-themed comic strips are from Gocomics. Maybe I should commission some generic Reading The Comics art from the cartoonists and artists I know. It could make things more exciting on a visually dull week like this.

Mark Anderson’s Andertoons got its entry on the 25th. We draw the name “exponents” from the example of Michael Stifel, a 16th-century German theologian/mathematician. He’d described them as exponents in his influential 1544 book Arithmetica Integra. But I don’t know why he picked the name “exponent” rather than some other word.

Nate Fakes’s Break of Day for the 25th is the anthropomorphic numerals gag for this week.

Dave Blazek’s Loose Parts for the 25th is not quite the anthropomorphic shapes joke for this week. The word “isosceles” does trace back to Greek, of course. The first part comes from “isos”, meaning equal; you see the same root in terms like “isobar” and “isometric view”. The “sceles” part comes from “skelos”, meaning leg. Say what you will about an isosceles triangle, and you may, as they’ve got poor hearing, but they do have two legs of the same length. If you want to say an equilateral triangle, which has three legs of the same length, is an isosceles triangle, you can do that. You’ll be right. But you will look like you’re trying a little too hard to make a point, the way you do if you point to a square and start off by calling it a rhombus.

Donna A Lewis’s Reply All Lite for the 25th tries doing a joke about doing mathematics by hand being a sign of old age. If we’re talking about arithmetic … I could go along with that, grudgingly. Calculator applications are so reliable and so quick that it’s hard to justify doing arithmetic by hand unless it’s a very simple problem. If you have fun doing that, good.

But if we’re doing real mathematics, the working out of a model and the implications of that, or working out calculus or group theory or graph theory or the like? There are surely some people who can do all this work in their heads and I am impressed by that. But much of real mathematics is working out implications of ideas, and that’s done so very well by hand. I haven’t found a way of typing in strings of expressions which makes it easier for me to think about the mathematics rather than the formatting. And I would believe in a note-taking program that was as sensitive and precise as pen on paper. I haven’t seen one yet, though. (I have small handwriting, and the applications I’ve tried turn all my writing into tiny, disconnected dots and scribbles.)

Ralph Hagen’s The Barn for the 27th is superficially about Olbers’s Paradox. If there’s an infinitely large, infinitely old universe, then how can the night sky be dark? The light of all those stars should come together to make night even more brilliantly blazing than the daytime sun. This is a legitimate calculus problem. The reasoning is sound. The light of a star trillions upon trillions of light-years away may be impossibly faint. But there are so many stars that far away that, added together, their light amounts to as much as the light from the nearer stars. Integral calculus tells us what happens when we have infinitely large numbers of impossibly tiny things added together. In the case of stars, infinitely many impossibly faint stars would come together to an infinitely bright night sky. That the night is dark tells us: the universe can’t be both infinitely large and infinitely old. There must be limits to how far away anything can be.
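A quick way to make that reasoning concrete, as a sketch with made-up symbols (L for an average star’s luminosity, n for the number of stars per unit volume, and ignoring for the moment that stars can block one another): a thin spherical shell of radius r and thickness dr centered on us holds about 4 π r² n dr stars, each delivering a flux of L/(4 π r²), so every shell contributes the same amount of light, and adding up the infinitely many shells of an infinitely large universe gives an unbounded total:

F = \int_0^{\infty} \frac{L}{4\pi r^2} \cdot 4 \pi r^2 n \, dr = \int_0^{\infty} n L \, dr \rightarrow \infty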

The Barn reappears in my attention on the 28th, with a subverted word problem joke.

Stars On The Flag


The United States flag has as many stars as the country has states. For a long while star arrangement was up to the flag-maker, with no specific rule in place. This is where the occasional weird and ugly 19th century flag comes from. But the arrangement has since been codified: the stars are to be in rows, or at least staggered rows.

It’s easy to understand how to arrange 48 stars, which the flag had for a while. Or 49 stars, which it had almost long enough to get a new flag made. 50 stars, which it’s had for longer than 48 now, are familiar from experience. But a natural question is how to arrange an arbitrary number of stars. And courtesy of the MTBos Blogbog, linking to essays about mathematics, I don’t have to answer it myself.

Experience First Math reviewed the problem recently. You can find a pattern by playing around, of course. It’s not very efficient, but we don’t need new flags very often. We don’t need to save time on this.

And uniformly spacing stuff can be a hard problem. For example, no one knows what is the most uniform way to put thirteen spots on the surface of a sphere. We’re certain that we’re close, though.

This is a simpler problem. We have to fit stars in a rectangle. The stars have to be arranged in rows, or in staggered rows. Each row can’t be too much bigger or smaller than its neighbors. And with that, a little bit of factoring and geometric reasoning and counting produces a lovely result: how to generally arrange stars.

Well, almost generally. There are some numbers that don’t work with alternating rows. We’ve seen this before. There were some ugly compromises necessary to have a 44-star flag, in the 1890s, or the 36-star flag in 1865. But with this alternating-rows example, we’ve got a hint to working out other nearly-staggered and nearly-alternating row patterns.
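If you’d like to play along at home, here’s a minimal Python sketch of the kind of search that factoring-and-counting amounts to. It only looks for the simplest kind of staggered pattern, rows that alternate between some length and one star fewer; the function name and the exact rules are my own for illustration, not anything from the Experience First Math essay:

def staggered_patterns(stars):
    # Look for layouts of `stars` as rows alternating between a longer row
    # and a row one star shorter, the way the 50-star flag staggers five
    # rows of six with four rows of five.
    patterns = []
    for row_len in range(2, stars + 1):
        for long_rows in range(1, stars // row_len + 1):
            for short_rows in (long_rows, long_rows - 1):
                total = long_rows * row_len + short_rows * (row_len - 1)
                if total == stars:
                    patterns.append((long_rows, row_len, short_rows, row_len - 1))
    return patterns

# (5, 6, 4, 5) -- five rows of six plus four rows of five -- turns up for 50.
print(staggered_patterns(50))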

Combining Matrices And Model Universes


I would like to resume talking about matrices and really old universes and the way nucleosynthesis in these model universes causes atoms to keep settling down to a peculiar but unchanging distribution.

I’d already described how a matrix offers a nice way to organize elements, and in ways that encode information about the context of the elements by where they’re placed. That’s useful and saves some writing, certainly, although by itself it’s not that interesting. Matrices start to get really powerful when, first, the elements being stored are things on which you can do something like arithmetic with pairs of them. Here I mostly just mean that you can add together two elements, or multiply them, and get back something meaningful.

This typically means that the matrix is made up of a grid of numbers, although that isn’t actually required, just, really common if we’re trying to do mathematics.

Then you get the ability to add together and multiply together the matrices themselves, turning pairs of matrices into some new matrix, and building something that works a lot like arithmetic on these matrices.

Adding one matrix to another is done in almost the obvious way: add the element in the first row, first column of the first matrix to the element in the first row, first column of the second matrix; that’s the first row, first column of your new matrix. Then add the element in the first row, second column of the first matrix to the element in the first row, second column of the second matrix; that’s the first row, second column of the new matrix. Add the element in the second row, first column of the first matrix to the element in the second row, first column of the second matrix, and put that in the second row, first column of the new matrix. And so on.

This means you can only add together two matrices that are the same size — the same number of rows and of columns — but that doesn’t seem unreasonable.

You can also do something called scalar multiplication of a matrix, in which you multiply every element in the matrix by the same number. A scalar is just a number that isn’t part of a matrix. This multiplication is useful, not least because it lets us talk about how to subtract one matrix from another: to find the difference of the first matrix and the second, scalar-multiply the second matrix by -1, and then add the first to that product. But you can do scalar multiplication by any number, by two or minus pi or by zero if you feel like it.
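If it helps to see the bookkeeping spelled out, here’s a minimal sketch in Python, using plain lists of lists rather than any matrix library so the element-by-element work stays visible; the function names are just mine for illustration:

def matrix_add(A, B):
    # Entry (i, j) of the sum is A[i][j] + B[i][j]; A and B must be the same size.
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def scalar_multiply(r, A):
    # Multiply every element of the matrix A by the number r.
    return [[r * A[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def matrix_subtract(A, B):
    # A - B is A plus (-1) times B.
    return matrix_add(A, scalar_multiply(-1, B))

A = [[1, 2], [3, 4]]
B = [[10, 20], [30, 40]]
print(matrix_add(A, B))        # [[11, 22], [33, 44]]
print(matrix_subtract(B, A))   # [[9, 18], [27, 36]]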

I should say something about notation. When we want to write out these kinds of operations efficiently, of course, we turn to symbols to represent the matrices. We can, in principle, use any symbols, but by convention a matrix usually gets represented with a capital letter, A or B or M or P or the like. So to add matrix A to matrix B, with the result being matrix C, we can write out the equation “A + B = C”, which is about as simple as we could hope to see. Scalars are normally written in lowercase letters, often Greek letters, if we don’t know what the number is, so that the scalar multiplication of the number r and the matrix A would be the product “rA”, and we could write the difference between matrix A and matrix B as “A + (-1)B” or “A – B”.

Matrix multiplication, now, that is done by a process that sounds like doubletalk, and it takes a while of practice to do it right. But there are good reasons for doing it that way and we’ll get to one of those reasons by the end of this essay.

To multiply matrix A and matrix B together, we do multiply various pairs of elements from both matrix A and matrix B. The surprising thing is that we also add together sets of these products, per this rule.

Take the element in the first row, first column of A, and multiply it by the element in the first row, first column of B. Add to that the product of the element in the first row, second column of A and the second row, first column of B. Add to that total the product of the element in the first row, third column of A and the third row, first column of B, and so on. When you’ve run out of columns of A and rows of B, this total is the first row, first column of the product of the matrices A and B.

Plenty of work. But we have more to do. Take the product of the element in the first row, first column of A and the element in the first row, second column of B. Add to that the product of the element in the first row, second column of A and the element in the second row, second column of B. Add to that the product of the element in the first row, third column of A and the element in the third row, second column of B. And keep adding those up until you’re out of columns of A and rows of B. This total is the first row, second column of the product of matrices A and B.

This does mean that you can multiply matrices of different sizes, provided the first one has as many columns as the second has rows. And the product may be a completely different size from the first or second matrices. It also means it might be possible to multiply matrices in one order but not the other: if matrix A has four rows and three columns, and matrix B has three rows and two columns, then you can multiply A by B, but not B by A.
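Here’s the same doubletalk written as a minimal Python sketch. Entry (i, j) of the product comes from running along row i of the first matrix and down column j of the second, multiplying pairs of elements and adding the products up; the check at the top is the size rule just described:

def matrix_multiply(A, B):
    rows_A, cols_A = len(A), len(A[0])
    rows_B, cols_B = len(B), len(B[0])
    if cols_A != rows_B:
        raise ValueError("the first matrix needs as many columns as the second has rows")
    # Entry (i, j): sum over k of A[i][k] * B[k][j].
    return [[sum(A[i][k] * B[k][j] for k in range(cols_A))
             for j in range(cols_B)]
            for i in range(rows_A)]

# A is 4-by-3 and B is 3-by-2, so A times B works (and is 4-by-2),
# while B times A would raise the error above.
A = [[1, 0, 2], [0, 1, 0], [3, 1, 1], [2, 2, 2]]
B = [[1, 2], [3, 4], [5, 6]]
print(matrix_multiply(A, B))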

My recollection on learning this process was that this was crazy, and the workload ridiculous, and I imagine people who get this in Algebra II, and don’t go on to using mathematics later on, remember the process as nothing more than an unpleasant blur of doing a lot of multiplying and addition for some reason or other.

So here is one of the reasons why we do it this way. Let me define two matrices:

A = \begin{pmatrix}  3/4 & 0 & 2/5 \\  1/4 & 3/5 & 2/5 \\  0 & 2/5 & 1/5  \end{pmatrix}

B = \begin{pmatrix} 100 \\ 0 \\ 0 \end{pmatrix}

Then matrix A times B is

AB = \begin{pmatrix}  3/4 \cdot 100 + 0 \cdot 0 + 2/5 \cdot 0 \\  1/4 \cdot 100 + 3/5 \cdot 0 + 2/5 \cdot 0 \\  0 \cdot 100 + 2/5 \cdot 0 + 1/5 \cdot 0  \end{pmatrix} = \begin{pmatrix}  75 \\  25 \\  0  \end{pmatrix}

You’ve seen those numbers before, of course: the matrix A contains the probabilities I put in my first model universe to describe the chances that over the course of a billion years a hydrogen atom would stay hydrogen, or become iron, or become uranium, and so on. The matrix B contains the original distribution of atoms in the toy universe, 100 percent hydrogen and nothing else. And the product of A and B was exactly the distribution after that first billion years: 75 percent hydrogen, 25 percent iron, and no uranium.

If we multiply the matrix A by that product again — well, you should expect we’re going to get the distribution of elements after two billion years, that is, 56.25 percent hydrogen, 33.75 percent iron, 10 percent uranium, but let me write it out anyway to show:

\begin{pmatrix}  3/4 & 0 & 2/5 \\  1/4 & 3/5 & 2/5 \\  0 & 2/5 & 1/5  \end{pmatrix}\begin{pmatrix}  75 \\ 25 \\ 0  \end{pmatrix} = \begin{pmatrix}  3/4 \cdot 75 + 0 \cdot 25 + 2/5 \cdot 0 \\  1/4 \cdot 75 + 3/5 \cdot 25 + 2/5 \cdot 0 \\  0 \cdot 75 + 2/5 \cdot 25 + 1/5 \cdot 0  \end{pmatrix} = \begin{pmatrix}  56.25 \\  33.75 \\  10  \end{pmatrix}

And if you don’t know just what would happen if we multiplied A by that product, you aren’t paying attention.

This also gives a reason why matrix multiplication is defined this way. The operation captures neatly the operation of making a new thing — in the toy universe case, hydrogen or iron or uranium — out of some combination of fractions of an old thing — again, the former distribution of hydrogen and iron and uranium.

Or here’s another reason. Since this matrix A has three rows and three columns, you can multiply it by itself and get a matrix of three rows and three columns out of it. That matrix — which we can write as A^2 — then describes how two billion years of nucleosynthesis would change the distribution of elements in the toy universe. A times A times A would give three billion years of nucleosynthesis; A^10, ten billion years. The actual calculating of the numbers in these matrices may be tedious, but it describes a complicated operation very efficiently, which we always want to do.
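This is easy to check numerically. Here’s a minimal sketch using NumPy (any matrix library would do), with the toy model’s probabilities as the matrix A and the all-hydrogen universe as the starting vector; the billion-year snapshots from before fall right out:

import numpy as np

# The toy model's transition probabilities: column j says where element j's
# atoms go over a billion years (hydrogen, iron, uranium down the rows).
A = np.array([[0.75, 0.0, 0.4],
              [0.25, 0.6, 0.4],
              [0.0,  0.4, 0.2]])

b = np.array([100.0, 0.0, 0.0])   # a universe of nothing but hydrogen

print(A @ b)                                # [75. 25.  0.], one billion years
print(A @ (A @ b))                          # [56.25 33.75 10.  ], two billion years
print(np.linalg.matrix_power(A, 10) @ b)    # roughly [34.9 43.4 21.7], ten billion years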

I should mention another bit of notation. We usually use capital letters to represent matrices; but, a matrix that’s just got one column is also called a vector. That’s often written with a lowercase letter, with a little arrow above the letter, as in \vec{x} , or in bold typeface, as in x. (The arrows are easier to put in writing, the bold easier when you were typing on typewriters.) But if you’re doing a lot of writing this out, and know that (say) x isn’t being used for anything but vectors, then even that arrow or boldface will be forgotten. Then we’d write the product of matrix A and vector x as just Ax.  (There are also cases where you put a little caret over the letter; that’s to denote that it’s a vector that’s one unit of length long.)

When you start writing vectors without an arrow or boldface you start to run the risk of confusing which symbols mean scalars and which mean vectors. That’s one of the reasons that Greek letters are popular for scalars. It’s also common to put scalars to the left and vectors to the right. So if one saw “rMx”, it would be expected that r is a scalar, M a matrix, and x a vector, and if they’re not then this should be explained in text nearby, preferably before the equations. (And of course if it’s work you’re doing, you should know going in what you mean the letters to represent.)

Lewis Carroll and my Playing With Universes


I wanted to explain what’s going on that my little toy universes with three kinds of elements changing to one another keep settling down to steady and unchanging distributions of stuff. I can’t figure a way to do that other than to introduce some actual mathematics notation, and I’m aware that people often find that sort of thing off-putting, or terrifying, or at the very least unnerving.

There’s fair reason to: the entire point of notation is to write down a lot of information in a way that’s compact or easy to manipulate. Using it at all assumes that the writer, and the reader, are familiar with enough of the background that they don’t have to have it explained at each reference. To someone who isn’t familiar with the topic, then, the notation looks like symbols written down without context and without explanation. It’s much like wandering into an Internet forum where all the local acronyms are unfamiliar, the in-jokes are heavy on the ground, and for some reason nobody actually spells out Dave Barry’s name in full.

Let me start by looking at the descriptions of my toy universe: it’s made up of a certain amount of hydrogen, a certain amount of iron, and a certain amount of uranium. Since I’m not trying to describe, like, where these elements are or how they assemble into toy stars or anything like that, I can describe everything that I find interesting about this universe with three numbers. I had written those out as “40% hydrogen, 35% iron, 25% uranium”, for example, or “10% hydrogen, 60% iron, 30% uranium”, or whatever the combination happens to be. If I write the elements in the same order each time, though, I don’t really need to add “hydrogen” and “iron” and “uranium” after the numbers, and if I’m always looking at percentages I don’t even need to add the percent symbol. I can just list the numbers and let the “percent hydrogen” or “percent iron” or “percent uranium” be implicit: “40, 35, 25”, for one universe’s distribution, or “10, 60, 30” for another.

Letting the position of where a number is written carry information is a neat and easy way to save effort, and when you notice what’s happening you realize it’s done all the time: it’s how writing the date as “7/27/14” makes any sense, or how a sports scoreboard might compactly describe the course of the game:

0 1 0   1 2 0   0 0 4   8 13 1
2 0 0   4 0 0   0 0 1   7 15 0

To use the notation you need to understand how the position encodes information. “7/27/14” doesn’t make sense unless you know the first number is the month, the second the day within the month, and the third the year within the current century; and the fact that there’s an equally strong convention putting the day first and the month second presents hazards when the date is ambiguous. Reading the box score requires knowing the top row reflects the performance of the visitor’s team, the bottom row the home team, and the first nine columns count the runs by each team in each inning, while the last three columns are the total count of runs, hits, and errors by that row’s team.

When you put together the numbers describing something into a rectangular grid, that’s termed a matrix of numbers. The box score for that imaginary baseball game is obviously one, but it’s also a matrix if I just write the numbers describing my toy universe in a row, or a column:

40
35
25

or

10
60
30

If a matrix has just the one column, it’s often called a vector. If a matrix has the same number of rows as it has columns, it’s called a square matrix. Matrices and vectors are also usually written with either straight brackets or curled parentheses around them, left and right, but that’s annoying to do in HTML so please just pretend.

The matrix as mathematicians know it today got put into a logically rigorous form around 1850 largely by the work of James Joseph Sylvester and Arthur Cayley, leading British mathematicians who also spent time teaching in the United States. Both are fascinating people, Sylvester for his love of poetry and language and for an alleged incident while briefly teaching at the University of Virginia which the MacTutor archive of mathematician biographies, citing L S Feuer, describes so: “A student who had been reading a newspaper in one of Sylvester’s lectures insulted him and Sylvester struck him with a sword stick. The student collapsed in shock and Sylvester believed (wrongly) that he had killed him. He fled to New York where one of his elder brothers was living.” MacTutor goes on to give reasons why this story may be somewhat distorted, although it does suggest one solution to the problem of students watching their phones in class.

Cayley, meanwhile, competes with Leonhard Euler for prolific range in a mathematician. MacTutor cites him as having at least nine hundred published papers, covering pretty much all of modern mathematics, including work that would underlie quantum mechanics and non-Euclidean geometry. He wrote about 250 papers in the fourteen years he was working as a lawyer, which would by itself have made him a prolific mathematician. If you need to bluff your way through a mathematical conversation, saying “Cayley” and following it with any random noun will probably allow you to pass.

MathWorld mentions, to my delight, that Lewis Carroll, in his secret guise as Charles Dodgson, came into the world of matrices in 1867 with an objection to the very word. In writing about them, Dodgson said, “I am aware that the word `Matrix’ is already in use to express the very meaning for which I use the word `Block’; but surely the former word means rather the mould, or form, into which algebraical quantities may be introduced, than an actual assemblage of such quantities”. He’s got a fair point, really, but there wasn’t much to be done in 1867 to change the word, and it’s only gotten more entrenched since then.

What’s Going On In The Old Universe


Last time in this infinitely-old universe puzzle, we found that by making a universe of only three kinds of atoms (hydrogen, iron, and uranium) which shifted to one another with fixed chances over the course of time, we’d end up with the same distribution of atoms regardless of what the distribution of hydrogen, iron, and uranium was to start with. That seems like it might require explanation.

(For people who want to join us late without re-reading: I got to wondering what the universe might look like if it just ran on forever, stars fusing lighter elements into heavier ones, heavier elements fissioning into lighter ones. So I looked at a toy model where there were three kinds of atoms, dubbed hydrogen for the lighter elements, iron for the middle, and uranium for the heaviest, and made up some numbers saying how likely hydrogen was to be turned into heavier atoms over the course of a billion years, how likely iron was to be turned into something heavier or lighter, and how likely uranium was to be turned into lighter atoms. And sure enough, if the rates of change stay constant, then the universe goes from any initial distribution of atoms to a single, unchanging-ever-after mix in surprisingly little time, considering it’s got a literal eternity to putter around.)

The first question, it seems, is whether I happened to pick a freak set of numbers for the change of one kind of atom to another. It’d be a stroke of luck, but, these things happen. In my first model, I gave hydrogen a 25 percent chance of turning to iron, and no chance of turning to uranium, in a billion years. Let’s change that so any given hydrogen atom has a 20 percent chance of turning to iron and a 20 percent chance of turning to uranium. Similarly, instead of iron having no chance of turning to hydrogen and a 40 percent chance of turning to uranium, let’s try giving each iron atom a 25 percent chance of becoming hydrogen and a 25 percent chance of becoming uranium. Uranium, first time around, had a 40 percent chance of becoming hydrogen and a 40 percent chance of becoming iron. Let me change that to a 60 percent chance of becoming hydrogen and a 20 percent chance of becoming iron.

With these chances of changing, a universe that starts out purely hydrogen settles on being about 50 percent hydrogen, a little over 28 percent iron, and a little over 21 percent uranium in about ten billion years. If the universe starts out with equal amounts of hydrogen, iron, and uranium, however, it settles over the course of eight billion years to … 50 percent hydrogen, a little over 28 percent iron, and a little over 21 percent uranium. If it starts out with no hydrogen and the rest of matter evenly split between iron and uranium, then over the course of twelve billion years it gets to … 50 percent hydrogen, a little over 28 percent iron, and a little over 21 percent uranium.

Perhaps the problem is that I’m picking these numbers, and I’m biased towards things that are pretty nice ones — halves and thirds and two-fifths and the like — and maybe that’s causing this state where the universe settles down very quickly and stops changing any. We should at least try that before supposing there’s necessarily something more than coincidence going on here.

So I set the random number generator to produce some element changes which can’t be susceptible to my bias for simple numbers. Give hydrogen a 44.5385 percent chance of staying hydrogen, a 10.4071 percent chance of becoming iron, and a 45.0544 percent chance of becoming uranium. Give iron a 25.2174 percent chance of becoming hydrogen, a 32.0355 percent chance of staying iron, and a 42.7471 percent chance of becoming uranium. Give uranium a 2.9792 percent chance of becoming hydrogen, a 48.9201 percent chance of becoming iron, and a 48.1007 percent chance of staying uranium. (Clearly, by the way, I’ve given up on picking numbers that might reflect some actual if simple version of nucleosynthesis and I’m just picking numbers for numbers’ sake. That’s all right; the question for this essay is whether we’re stuck getting an unchanging yet infinitely old universe.)

And the same thing happens again: after nine billion years a universe starting from pure hydrogen will be about 18.7 percent hydrogen, about 35.7 percent iron, and about 45.6 percent uranium. Starting from no hydrogen, 50 percent iron, and 50 percent uranium, we get to the same distribution in again about nine billion years. A universe beginning with equal amounts hydrogen, iron, and uranium under these rules gets to the same distribution after only seven billion years.
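If you’d like to check my arithmetic, here’s a minimal Python sketch of the iteration with those randomly generated probabilities; it just applies the transition rules over and over, and the variable names are mine:

# Where each element's atoms go over a billion years, reading down the columns
# (hydrogen, iron, uranium), using the randomly generated percentages above.
transitions = [[0.445385, 0.252174, 0.029792],
               [0.104071, 0.320355, 0.489201],
               [0.450544, 0.427471, 0.481007]]

def billion_years(mix):
    # Each new amount is a weighted sum of the old amounts.
    return [sum(transitions[i][j] * mix[j] for j in range(3)) for i in range(3)]

mix = [100.0, 0.0, 0.0]   # start from pure hydrogen
for _ in range(9):
    mix = billion_years(mix)
print([round(x, 1) for x in mix])   # roughly [18.7, 35.7, 45.6]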

The conclusion is this settling down doesn’t seem to be caused by picking numbers that are too particularly nice-looking or obviously attractive; and the distributions don’t seem to have an obvious link to what the probabilities of changing are. There seems to be something happening here, though admittedly we haven’t proven that rigorously. To spoil a successor article in this thread: there is something here, and it’s a big thing.

(Also, no, we’re not stuck with an unchanging universe, and good on you if you can see ways to keep the universe changing without, like, having the probability of one atom changing to another itself vary in time.)

To Build A Universe


So I kept thinking about what the distribution of elements might be in an infinitely old universe. It’s a tough problem to consider, if you want to do it exactly right, since you have to consider how stars turn lighter atoms into heavier ones through a blistering array of possibilities. Besides the nearly hundred different elements — distinguished by the count of how many protons are in the nucleus — each element has multiple isotopes — distinguished by how many neutrons are in the nucleus — and I don’t know how many there are to consider but it’s certainly at least several hundred to deal with. There’s probably a major work in the astrophysics literature describing all the ways atoms and their isotopes can get changed over the course of a star’s lifetime, either actually existing or waiting for an indefatigable writer to make it her life’s work.

But I can make a toy model, because I want to do mathematics, and I can see what I might learn from that. This is basically a test vehicle: I want to see whether building a more accurate model is likely to be worthwhile.

For my toy model of the universe I will pretend there are only three kinds of atoms in the universe: hydrogen, iron, and uranium. These represent the lighter elements — which can fuse together to release energy — and Iron-56 — which can’t release energy either by fusing into heavier elements or by fissioning into lighter ones — and the heavier elements — which can fission apart to release energy and lighter elements. I can describe the entire state of the universe with three numbers, saying what fraction of the universe is hydrogen, what fraction is iron, and what fraction is uranium. So these are pretty powerful toys.

Over time the stars in this universe will change some of their hydrogen into iron, and some of their iron into uranium. The uranium will change some of itself into hydrogen and into iron. How much? I’m going to make up some nice simple numbers and say that over the course of a billion years, one-quarter of all the hydrogen in the universe will be changed into iron; three-quarters of the hydrogen will remain hydrogen. Over that same time, let’s say two-fifths of all the iron in the universe will be changed to uranium, while the remaining three-fifths will remain iron. And the uranium? Well, that decays; let’s say that two-fifths of the uranium will become hydrogen, two-fifths will become iron, and the remaining one-fifth will stay uranium. If I had more elements in the universe I could make a more detailed, subtle model, and if I didn’t feel quite so lazy I might look up more researched figures for this, but, again: toy model.

I’m by the way assuming this change of elements is constant for all time and that it doesn’t depend on the current state of the universe. There are sound logical reasons behind this: to have the rate of nucleosynthesis vary in time would require me to do more work. As above: toy model.

So what happens? This depends on what we start with, sure. Let’s imagine the universe starts out made of nothing but hydrogen, so that the composition of the universe is 100% hydrogen, 0% iron, 0% uranium. After the first billion years, a quarter of the hydrogen will have changed to iron, but since there wasn’t any iron to start with there’s still no uranium. The universe’s composition would be 75% hydrogen, 25% iron, 0% uranium. After the next billion years one-quarter of the hydrogen becomes iron and two-fifths of the iron becomes uranium, so we’ll be at 56.25% hydrogen, 33.75% iron, 10% uranium. Another billion years passes, and once again one-quarter of the hydrogen becomes iron, two-fifths of the iron becomes uranium, and two-fifths of the uranium becomes hydrogen and another two-fifths becomes iron. This is a lot of arithmetic but the results are easy enough to find: 46.188% hydrogen, 38.313% iron, 15.5% uranium. After some more time we have 40.841% hydrogen, 40.734% iron, 18.425% uranium. It’s maybe a fair question whether the universe is going to run itself all the way down to have nothing but iron, but, the next couple billions of years show things settling down. Let me put all this in a neat little table.

Composition of the Toy Universe
Age (billion years)   Hydrogen   Iron   Uranium
0 100% 0% 0%
1 75% 25% 0%
2 56.25% 33.75% 10%
3 46.188% 38.313% 15.5%
4 40.841% 40.734% 18.425%
5 38% 42.021% 19.979%
6 36.492% 42.704% 20.804%
7 35.691% 43.067% 21.242%
8 35.265% 43.260% 21.475%
9 35.039% 43.362% 21.599%
10 34.919% 43.417% 21.665%
11 34.855% 43.446% 21.700%
12 34.821% 43.461% 21.718%
13 34.803% 43.469% 21.728%
14 34.793% 43.473% 21.733%
15 34.788% 43.476% 21.736%
16 34.786% 43.477% 21.737%
17 34.784% 43.478% 21.738%
18 34.783% 43.478% 21.739%
19 34.783% 43.478% 21.739%
20 34.783% 43.478% 21.739%

We could carry on but there’s really no point: the numbers aren’t going to change again. Well, probably they’re changing a little bit, four or more places past the decimal point, but this universe has settled down to a point where just as much hydrogen is being lost to fusion as is being created by fission, and just as much uranium is created by fusion as is lost by fission, and just as much iron is being made as is being turned into uranium. There’s a balance in the universe.

At least, that’s the balance if we start out with a universe made of nothing but hydrogen. What if it started out with a different breakdown, for example, a universe that started as one-third hydrogen, one-third iron, and one-third uranium? In that case, as the universe ages, the distribution of elements goes like this:

Composition of the Toy Universe
Age (billion years)   Hydrogen   Iron   Uranium
0 33.333% 33.333% 33.333%
1 38.333% 41.667% 20%
2 36.75% 42.583% 20.667%
3 35.829% 43.004% 21.167%
4 35.339% 43.226% 21.435%
5 35.078% 43.345% 21.578%
10 34.795% 43.473% 21.732%
15 34.783% 43.478% 21.739%

We’ve gotten to the same distribution, only a tiny bit faster. (It doesn’t quite get there after fourteen billion years.) I hope it won’t shock you if I say that we’d see the same thing if we started with a universe made of nothing but iron, or of nothing but uranium, or of other distributions. Some take longer to settle down than others, but, they all seem to converge on the same long-term fate for the universe.

Obviously there’s something special about this toy universe, with three kinds of atoms changing into one another at these rates, which causes it to end up at the same distribution of atoms.
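For anyone who wants to experiment, here’s a minimal Python sketch of the whole toy model. It applies the transition rules for twenty billion years to a few different starting mixes, and each run lands on about the same split as the tables above; the function name and the choice of starting mixes are mine:

def step(h, fe, u):
    # One billion years: 1/4 of hydrogen becomes iron; 2/5 of iron becomes
    # uranium; uranium goes 2/5 to hydrogen, 2/5 to iron, 1/5 stays uranium.
    return (0.75 * h + 0.4 * u,
            0.25 * h + 0.6 * fe + 0.4 * u,
            0.4 * fe + 0.2 * u)

for mix in [(100, 0, 0), (100 / 3, 100 / 3, 100 / 3), (0, 50, 50)]:
    for _ in range(20):
        mix = step(*mix)
    print([round(x, 3) for x in mix])   # each run ends near [34.783, 43.478, 21.739]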

In A Really Old Universe


So, my thinking about an “Olbers Universe” infinitely old and infinitely large in extent brought me to a depressing conclusion that such a universe has to be empty, or at least just about perfectly empty. But we can still ponder variations on the idea and see if that turns up anything. For example, what if we have a universe that’s infinitely old, but not infinite in extent, either because space is limited or because all the matter in the universe is in one neighborhood?

Suppose we have stars. Stars do many things. One of them is they turn elements into other elements, mostly heavier ones. For the lightest of atoms — hydrogen and helium, for example — stars create heavier elements by fusing them together. Making the heavier atoms from these lighter ones, in the net, releases energy, which is why fusion is constantly thought of as a potential power source. And that’s good for making elements up to as heavy as iron. After that point, fusion becomes a net energy drain. But heavier elements still get made as the star dies: when it can’t produce energy by fusion anymore the great mass of the star collapses on itself and that shoves together atom nucleuses regardless of the fact this soaks up more energy. (Warning: the previous description of nucleosynthesis, as it’s called, was very simplified and a little anthropomorphized, and wasn’t seriously cross-checked against developments in the field since I was a physics-and-astronomy undergraduate. Do not use it to attempt to pass your astrophysics qualifier. It’s good enough for everyday use, what with how little responsibility most of us have for stars.)

The important thing to me is that a star begins as a ball of dust, produced by earlier stars (and, in our finite universe, by the Big Bang, which produced a lot of hydrogen and helium and some of the other lightest elements), that condenses into a star, turns many of the elements in it into other elements, and then returns to a cloud of dust that mixes with other dust clouds and forms new stars.

Now. Over time, over the generations of stars, we tend to get heavier elements out of the mix. That’s pretty near straightforward mathematics: if you have nothing but hydrogen and helium — atoms that have one or two protons in the nucleus — it’s quite a trick to fuse them together into something with more than two, three, or four protons in the nucleus. If you have hydrogen, helium, lithium, and beryllium to work with — one, two, three, and four protons in the nucleus — it’s easier to get products with anywhere from two up to eight protons in the nucleus. And so on. The tendency is for each generation of stars to have relatively less hydrogen and helium and relatively more of the heavier atoms in its makeup.

So what happens if you have infinitely many generations? The first guess would be, well, stars will keep gathering together and fusing together as long as there are any elements lighter than iron, so that eventually there’d be a time when there were no (or at least no significant) amounts of elements lighter than iron, at which point the stars cease to shine. There’s nothing more to fuse together to release energy and we have a universe of iron- and heavier-metal ex-stars. I’m not sure if this is an even more depressing infinite universe than the infinitely large, infinitely old one which couldn’t have anything at all in it.

Except that this isn’t the whole story. Heavier elements than iron can release energy by fission, splitting into two or more lighter elements. Uranium and radium and a couple other elements are famous for it, but I believe every element has at least some radioactive isotopes. Popular forms of fission will produce alpha particles, which is what they named this particular type of radioactive product before they realized it was just the nucleus of a helium atom. Other types of radioactive decay will produce neutrons, which, if they’re not in the nucleus of an atom, will last an average of about fifteen minutes before decaying into a proton — a hydrogen nucleus — and some other stuff. Some more exotic forms of radioactive decay can produce protons by themselves, too. I haven’t checked the entire set of possible fission byproducts but I wouldn’t be surprised if most of the lighter elements can be formed by something’s breaking down.

In short, even if we fused the entire contents of the universe into atoms heavier than iron, we would still get out a certain amount of hydrogen and of helium, and also of other lighter elements. That is, stars turn hydrogen and helium, eventually, into very heavy elements; but the very heavy elements turn at least part of themselves back into hydrogen and helium.

So, it seems plausible, at least qualitatively, that given enough time to work there’d be a stable condition: hydrogen and helium being turned into heavier atoms at the same rate that heavier atoms are producing hydrogen and helium in their radioactive decay. And an infinitely old universe has enough time for anything.

And that’s, to me, anyway, an interesting question: what would the distribution of elements look like in an infinitely old universe?

(I should point out here that I don’t know. I would be surprised if nobody in the astrophysics community has worked it out, at least in a rough form, for an as-authentic-as-possible set of assumptions about how nucleosynthesis works. But I am so ignorant of the literature that I’m not sure how even to find the answer if they have. I can think of it as a mathematical puzzle at least, though.)

In A Really Big Universe


I’d got to thinking idly about Olbers’ Paradox, the classic question of why the night sky is dark. It’s named for Heinrich Wilhelm Olbers, 1758-1840, who of course was not the first person to pose the problem nor to give a convincing answer to it, but, that’s the way naming rights go.

It doesn’t sound like much of a question at first; after all, it’s night. But if we suppose the universe is infinitely large and is infinitely old, then, along the path of any direction you look in the sky, day or night, there’ll be a star. The star may be very far away, so that it’s very faint; but it also takes up much less of the sky from being so far away. The result is that the star’s intensity, for the little patch of the sky it does take up, is no smaller. And there’ll be stars shining just as intensely in directions that are near to that first star. The sky in an infinitely large, infinitely old universe should be a wall of stars.
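To put a symbol or two on that (these are my stand-ins, not anything from the strip): if a star of radius R and luminosity L sits at distance r, the light we receive from it falls off like 1/r², but so does the patch of sky it covers, so its brightness per patch of sky doesn’t fade with distance at all:

\frac{\text{flux received}}{\text{patch of sky covered}} \approx \frac{L / (4 \pi r^2)}{\pi R^2 / r^2} = \frac{L}{4 \pi^2 R^2}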

Oh, some stars will be dimmer than average, and some brighter, but that doesn’t matter much. We can suppose the average star is of average brightness and average size for reasons that are right there in the name of the thing; it makes the reasoning a little simpler and doesn’t change the result.

The reason there is darkness is that our universe is neither infinitely large nor infinitely old. There aren’t enough stars to fill the sky and there’s not enough time for the light from all of them to get to us.

But we can still imagine standing on a planet in such an Olbers Universe (to save myself writing “infinitely large and infinitely old” too many times), with enough vastness and enough time to have a night sky that looks like a shell of starlight, and that’s what I was pondering. What might we see if we looked at the sky in these conditions?

Well, light, obviously; we can imagine the sky looking as bright as the sun, but in all directions above the horizon. The sun takes up a very tiny piece of the sky — it’s about as wide across as your thumb, held at arm’s length, and try it if you don’t believe me (better, try it with the Moon, which is about the same size as the Sun and easier to look at) — so, multiply that brightness by how much bigger the whole sky is than your thumb and imagine the investment in sunglasses this requires.

It’s worse than that, though. Yes, in any direction you look there’ll be a star, but if you imagine going on in that direction there’ll be another star, eventually. And another one past that, and another past that yet. And the light — the energy — of those stars shining doesn’t disappear because there’s a star between it and the viewer. The heat will just go into warming up the stars in its path and get radiated through.

This is why interstellar dust, or planets, or other non-radiating bodies don’t answer why the sky could be dark in a vast enough universe. Anything that gets enough heat put into it will start to glow and start to shine from that light. These intervening bodies will slow down the waves of heat from the stars behind them, but given enough time the heat will get through, and in an infinitely old universe, there is enough time.

The conclusion, then, is that our planet in an Olbers Universe would get an infinite amount of heat pouring onto it, at all times. It’s hard to see how life could possibly exist in the circumstance; water would boil away — rock would boil away — and the planet just would evaporate into dust.

Things get worse, though: it’s not just our planet that would get boiled away like this, but as far as I can tell, the stars too. Each star would be getting an infinite amount of heat pouring into it. It seems to me this requires the matter making up the stars to get so hot it would boil away, just as the atmosphere and water and surface of the imagined planet would, until the star — until all stars — disintegrate. At this point I have to think of the great super-science space-opera writers of the early 20th century, listening to the description of a wave of heat that boils away a star, and sniffing, “Amateurs. Come back when you can boil a galaxy instead”. Well, the galaxy would boil too, for the same reasons.

Even once the stars have managed to destroy themselves, though, the remaining atoms would still have a temperature, and would still radiate faint light. And that faint light, multiplied by the infinitely many atoms and all the time they have, would still accumulate to an infinitely great heat. I don’t know how hot you have to get to boil a proton into nothingness — or a quark — but if there is any temperature that does it, this universe would reach it.

So the result, I had to conclude, is that an infinitely large, infinitely old universe could exist only if it didn’t have anything in it, or at least if it had nothing that wasn’t at absolute zero in it. This seems like a pretty dismal result and left me looking pretty moody for a while, even if I was sure that EE “Doc” Smith would smile at me for working out the heat-death of quarks.

Of course, there’s no reason that a universe has to, or even should, be pleasing to imagine. And there is a little thread of hope for life, or at least existence, in an Olbers Universe.

All the destruction-of-everything comes about from the infinitely large number of stars, or other radiating bodies, in the universe. If there’s only finitely much matter in the universe, then, their total energy doesn’t have to add up to the point of self-destruction. This means giving up an assumption that was slipped into my Olbers Universe without anyone noticing: the idea that the matter in it is about uniformly distributed. If you compare any two volumes of equal size, from any time, they have about the same number of stars in them. This is the property cosmology calls homogeneity.

Our universe seems to have this homogeneity. Oh, there are spots where you can find many stars (like the center of a galaxy) and spots where there are few (like, the space in-between galaxies), but the galaxies themselves seem to be pretty uniformly distributed.

But an imagined universe doesn’t have to have this property. If we suppose an Olbers Universe without it, then we can have stars and planets and maybe even life. It could even have many times the mass, the number of stars and everything, that our universe has, spread across something much bigger than our universe. But it does mean that this infinitely large, infinitely old universe will have all its matter clumped together into some section, and nearly all the space — in a universe with an incredible amount of space — will be empty.

I suppose that’s better than a universe with nothing at all, but somehow only a little better. Even though it could be a universe with more stars and more space occupied than our universe has, that infinitely vast emptiness still haunts me.

(I’d like to note, by the way, that all this universe-building and reasoning hasn’t required any equations or anything like that. One could argue this has diverted from mathematics and cosmology into philosophy, and I wouldn’t dispute that, but I can imagine philosophers might.)

Playing With Tiles


The Math Less Travelled, one of the blogs that I read, posted yesterday a link to another web site, a Tiling Database created by Brian Wichmann and Tony Lee. The database is exactly what it says on the label: a collection of patterns which one could put on a flat surface and extend outward in both directions as far as you like. In principle, you could get any of them to spruce up your kitchen, although some of them would be a bit staggering to face in the morning, even in other color schemes.

