Reading the Comics, August 25, 2014: Summer Must Be Ending Edition


I’m sorry to admit that I can’t think of a unifying theme for the most recent round of comic strips which mention mathematical topics, other than that this is one of those rare instances of nobody mentioning infinite numbers of typing monkeys. I have to guess Comic Strip Master Command sent around a notice that summer vacation (in the United States) will be ending soon, so cartoonists should start practicing their mathematics jokes.

Tom Toles’s Randolph Itch, 2 a.m. (August 22, rerun) presents what’s surely the lowest-probability outcome of a toss of a fair coin: its landing on the edge. (I remember this as also the gimmick starting a genial episode of The Twilight Zone.) It’s a nice reminder that you do have to consider all the things that might affect an experiment’s outcome before concluding what are likely and unlikely results.

It also inspires, in me, a side question: a single coin, obviously, has a tiny chance of landing on its side. A roll of coins has a tiny chance of not landing on its side. How thick would a roll have to be before the chance of landing on the side and the chance of landing on either end become equal? (Without working it out, my guess is it’s about when the roll of coins is as tall as it is across, but I wouldn’t be surprised if it were some slightly oddball thing, like the roll having to be the square root of two times the diameter of the coins.)
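Out of curiosity I tried putting a rough number on that, under a deliberately naive model of my own: drop the roll at a uniformly random orientation and say it settles on whichever surface its center of mass is over, ignoring all the bouncing and rolling a real toss involves. That model is an assumption, not anything worked out from the physics, but it gives a first stab:

```python
import math

# Naive model (an assumption, not a worked-out physics answer): the roll,
# a cylinder of diameter d and height h, settles on a flat end whenever the
# vertical direction lies within the cone from its center to that end's rim,
# a half-angle of arctan(d/h).  The two end caps then cover a fraction
# 1 - cos(arctan(d/h)) of all directions; the curved side gets the rest.

def end_probability(height_over_diameter):
    theta = math.atan2(1.0, height_over_diameter)  # half-angle of the end cone
    return 1.0 - math.cos(theta)

# Find the height (in diameters) where ends and side are equally likely,
# i.e. where the end probability crosses 1/2, by bisection.
lo, hi = 0.01, 10.0
for _ in range(60):
    mid = (lo + hi) / 2
    if end_probability(mid) > 0.5:
        lo = mid          # still too squat: the ends are still more likely
    else:
        hi = mid
print(f"crossover at height = {lo:.3f} diameters")  # about 0.577, i.e. 1/sqrt(3)
```

Under that crude assumption the crossover is a roll only about 0.58 diameters tall — one over the square root of three, as it happens — though a model that allowed for bouncing and energy loss could land somewhere quite different.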

Doug Savage’s Savage Chickens (August 22) presents an “advanced Sudoku”, in a puzzle that’s either trivially easy or utterly impossible: there are so few constraints on the numbers in the presented puzzle that it’s not hard to write in digits that will satisfy them, but, if there’s one right answer, there’s not nearly enough information to tell which one it is. I do find the problem of satisfiability — giving just enough information to solve the puzzle, without allowing more than one solution to be valid — an interesting one. I imagine there’s a very similar problem at work in composing Ivasallay’s Find The Factors puzzles.

Phil Frank and Joe Troise’s The Elderberries (August 24, rerun) presents a “mind aerobics” puzzle in the classic mathematical form of drawing socks out of a drawer. Talking about pulling socks out of drawers suggests a probability puzzle, but the question actually takes it a different direction, into a different sort of logic, and asks about how many socks need to be taken out in order to be sure you have one of each color. The easiest way to apply this is, I believe, to use what’s termed the “pigeon hole principle”, which is one of those mathematical concepts so clear it’s hard to actually notice it. The principle is just that if you have fewer pigeon holes than you have pigeons, and put every pigeon in a pigeon hole, then there’s got to be at least one pigeon hole with more than one pigeon. (Wolfram’s MathWorld credits the statement to Peter Gustav Lejeune Dirichlet, a 19th century German mathematician with a long record of things named for him in number theory, probability, analysis, and differential equations.)
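For a concrete instance of that worst-case reasoning — with sock counts I’ve made up, since I don’t have the strip’s numbers in front of me — the bound works out like this:

```python
# A made-up instance of the sock problem (the strip's own numbers aren't
# given here, so these counts are purely illustrative).
sock_counts = {"black": 10, "white": 8, "argyle": 4}

# Worst case, you draw every sock EXCEPT those of the scarcest color before
# you finally get one of it; one more draw then guarantees one of each.
draws_needed = sum(sock_counts.values()) - min(sock_counts.values()) + 1
print(draws_needed)  # 19: draw all 10 black and all 8 white, then one more
```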

Dave Whamond’s Reality Check (August 24) pulls out the old little pun about algebra and former romantic partners. You’ve probably seen this joke passed around your friends’ Twitter or Facebook feeds too.

Julie Larson’s The Dinette Set (August 25) presents some terrible people’s definition of calculus, as “useless math with letters instead of numbers”, which I have to gripe about because that seems like a more on-point definition of algebra. I’m actually sympathetic to the complaint that calculus is useless, at least if you don’t go into a field that requires it (although that’s rather a circular definition, isn’t it?), but I don’t hold to the idea that whether something is “useful” should determine whether it’s worth learning. My suspicion is that things you find interesting are worth learning, either because you’ll find uses for them, or just because you’ll be surrounding yourself with things you find interesting.

Shifting from numbers to letters, as are used in algebra and calculus, is a great advantage. It allows you to prove things that are true for many problems at once, rather than just the one you’re interested in at the moment. This generality may be too much work to bother with, at least for some problems, but it’s easy to see what’s attractive in solving a problem once and for all.

Mikael Wulff and Anders Morgenthaler’s WuMo (August 25) uses a couple of motifs, neither of which I’m sure is precisely mathematical, but they seem close enough for my needs. First there’s the motif of Albert Einstein as just being so spectacularly brilliant that he can form an argument in favor of anything, regardless of whether it’s right or wrong. Surely that derives from Einstein’s general reputation of utter brilliance, perhaps flavored by the point that he was able to show how common-sense intuitive ideas about things like “it’s possible to say whether this event happened before or after that event” go wrong. And then there’s the motif of a sophistic argument being so massive and impressive in its bulk that it’s easier to just give in to it rather than try to understand or refute it.

It’s fair of the strip to present Einstein as beginning with questions about how one perceives the universe, though: his relativity work in many ways depends on questions like “how can you tell whether time has passed?” and “how can you tell whether two things happened at the same time?” These are questions which straddle physics, mathematics, and philosophy, and trying to find answers which are logically coherent and testable produced much of the work that’s given him such lasting fame.

Machines That Think About Logarithms


I confess that I picked up Edmund Callis Berkeley’s Giant Brains: Or Machines That Think, originally published 1949, from the library shelf as a source of cheap ironic giggles. After all, what is funnier than an attempt to explain to a popular audience that, wild as it may be to contemplate, electrically-driven machines could “remember” information and follow “programs” of instructions based on different conditions satisfied by that information? There’s a certain amount of that, though not as much as I imagined, and a good amount of descriptions of how the hardware of different partly or fully electrical computing machines of the 1940s worked.

But a good part, and the most interesting part, of the book is about algorithms, the ways to solve complicated problems without demanding too much computing power. This is fun to read because it showcases the ingenuity and creativity required to do useful work. The need for ingenuity will never leave us — we will always want to compute things that are a little beyond our ability — but to see how it’s done for a simple problem is instructive, if for nothing else than to learn the kinds of tricks you can do to get the most out of your computing resources.

The example that most struck me and which I want to share is from the chapter on the IBM Automatic Sequence-Controlled Calculator, built at Harvard at a cost of “somewhere near 3 or 4 hundred thousand dollars, if we leave out some of the cost of research and development, which would have been done whether or not this particular machine had ever been built”. It started working in April 1944, and wasn’t officially retired until 1959. It could store 72 numbers, each with 23 decimal digits. Like most computers (then and now) it could do addition and subtraction very quickly, at the then-blazing speed of about a third of a second; it could do multiplication tolerably quickly, in about six seconds; and division, rather slowly, in about fifteen seconds.

The process I want to describe is the taking of logarithms, and why logarithms should be interesting to compute takes a little bit of justification, although it’s implicitly there just in how fast calculations get done. Logarithms let one replace the multiplication of numbers with their addition, for a considerable savings in time; better, they let you replace the division of numbers with subtraction. They further let you turn exponentiation and roots into multiplication and division, which is almost always faster to do. Many human senses seem to work on a logarithmic scale, as well: we can tell that one weight is twice as heavy as the other much more reliably than we can tell that one weight is four pounds heavier than the other, or that one light is twice as bright as the other more reliably than that it is ten lumens brighter.
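Here’s a tiny illustration of that trade, done in Python rather than in the Calculator’s relays; in practice a stored table of logarithms would stand in for the log function:

```python
import math

# Multiply two numbers by adding their base-10 logarithms and then
# undoing the logarithm -- the trick that turns a slow multiplication
# into a fast addition (plus two table lookups).
a, b = 137.0, 48.2

log_sum = math.log10(a) + math.log10(b)   # addition stands in for multiplication
product_via_logs = 10 ** log_sum

print(product_via_logs)   # ~6603.4
print(a * b)              # 6603.4, direct multiplication for comparison
```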

What the logarithm of a number is depends on some other, fixed, quantity, known as the base. In principle any positive number other than 1 will do as base; in practice, these days people mostly only care about base e (which is a little over 2.718), the “natural” logarithm, because it has some nice analytic properties. Back in the day, which includes when this book was written, we also cared about base 10, the “common” logarithm, because we mostly work in base ten. I have heard of people who use base 2, but haven’t seen them myself and must regard them as an urban legend. The other bases are mostly used by people who are writing homework problems for the part of the class dealing with logarithms. To some extent it doesn’t matter what base you use. If you work out the logarithm in one base, you can convert that to the logarithm in another base by a multiplication.

The logarithm of some number in your base is the exponent you have to raise the base to to get your desired number. For example, the logarithm of 100, in base 10, is going to be 2 because 10^2 is 100, and the logarithm of e^(1/3) (a touch greater than 1.3956), in base e, is going to be 1/3. To dig deeper in my reserve of in-jokes, the logarithm of 2038, in base 10, is approximately 3.3092, because 10^3.3092 is just about 2038. The logarithm of e, in base 10, is about 0.4343, and the logarithm of 10, in base e, is about 2.303. Your calculator will verify all that.
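If you would rather have a computer than a calculator verify it, a few lines will do; the last line also shows the change-of-base-by-multiplication trick mentioned above:

```python
import math

print(math.log10(100))            # 2.0
print(math.log(math.e ** (1/3)))  # 0.333..., the base-e logarithm of e^(1/3)
print(math.log10(2038))           # 3.3092...
print(math.log10(math.e))         # 0.4342...
print(math.log(10))               # 2.3025...

# Change of base: a base-10 logarithm is just the base-e logarithm
# multiplied by a fixed constant, namely log10(e).
print(math.log(2038) * math.log10(math.e))  # 3.3092... again
```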

All that talk about “approximately” should have given you some hint of the trouble with logarithms. They’re only really easy to compute if you’re looking for whole powers of whatever your base is, and even then only if your base is 10 or 2 or something else simple like that. If you’re clever and determined you can work out, say, that the logarithm of 2, base 10, has to be close to 0.3. It’s fun to do that, but it’ll involve such reasoning as “two to the tenth power is 1,024, which is very close to ten to the third power, which is 1,000, so therefore the logarithm of two to the tenth power must be about the same as the logarithm of ten to the third power”. That’s clever and fun, but it’s hardly systematic, and it doesn’t get you many digits of accuracy.
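Written out as a line of arithmetic, that reasoning amounts to:

2^{10} = 1024 \approx 1000 = 10^3 \quad\Rightarrow\quad 10 \log_{10} 2 \approx 3 \quad\Rightarrow\quad \log_{10} 2 \approx 0.3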

So when I pick up this thread I hope to explain one way to produce as many decimal digits of a logarithm as you could want, without asking for too much from your poor Automatic Sequence-Controlled Calculator.

Writing About E (Not By Me)


It’s tricky to write about e. That is, it’s not a difficult thing to write about, but it’s hard to find the audience for this number. It’s quite important, mathematically, but it hasn’t got an easy-to-understand definition like pi’s “the circumference of a circle divided by its diameter”. E’s most concise definition, I guess, is “the base of the natural logarithm”, which as an explanation to someone who hasn’t done much mathematics is only marginally more enlightening than slapping him with a slice of cold pizza. And it hasn’t got the sort of renown of something like the golden ratio which makes the number sound familiar and even welcoming.

Still, the Mean Green Math blog (“Explaining the whys of mathematics”) has been running a series of essays explaining e, by looking at different definitions of the number. The most recent of these has been the twelfth in the series, and they seem to be arranged in chronological order under the category of Algebra II topics, and under the tag of “E” essays, although I can’t promise how long it’ll be before you have to flip through so many “older” page links on the category and tag pages that it’s harder to find that way. If I see a master page collecting all the Definitions Of E essays into one guide I’ll post that.

Reading the Comics, August 16, 2014: Saturday Morning Breakfast Cereal Edition


Zach Weinersmith’s Saturday Morning Breakfast Cereal is a long-running and well-regarded web comic that I haven’t paid much attention to because I don’t read many web comics. XKCD, Newshounds, and a couple others are about it. I’m not opposed to web comics, mind you, I just don’t get around to following them typically. But Saturday Morning Breakfast Cereal started running on Gocomics.com recently, and Gocomics makes it easy to start adding comics, and I did, and that’s served me well for the mathematical comics collections since it’s been a pretty dry spell. I bet it’s the summer vacation.

Saturday Morning Breakfast Cereal (July 30) seems like a reach for inclusion in mathematical comics since its caption is “Physicists make lousy firemen” and it talks about the action of a fire — and of the “living things” caught in the fire — as processes producing wobbling and increases in disorder. That’s an effort at describing a couple of ideas, the first that the temperature of a thing is connected to the speed at which the molecules making it up are moving, and the second that the famous entropy is a never-decreasing quantity. We get these notions from thermodynamics and particularly the attempt to understand physically important quantities like heat and temperature in terms of particles — which have mass and position and momentum — and their interactions. You could write an entire blog about entropy and probably someone does.

Randy Glasbergen’s Glasbergen Cartoons (August 2) uses the word-problem setup for a strip of “Dog Math” and tries to remind everyone teaching undergraduates the quotient rule that it really could be worse, considering.

Nate Fakes’s Break of Day (August 4) takes us into an anthropomorphized world that isn’t numerals for a change, to play on the idea that skill in arithmetic is evidence of particular intelligence.

Jiggs tries to explain addition to his niece, and learns his brother-in-law is his brother-in-law.

George McManus’s _Bringing Up Father_, originally run the 12th of April, 1949.

George McManus’s Bringing Up Father (August 11, rerun from April 12, 1949) goes to the old motif of using money to explain addition problems. It’s not a bad strategy, of course: in a way, arithmetic is one of the first abstractions one does, in going from the idea that a hundred of something added to a hundred fifty of something will yield two hundred fifty of that thing, and it doesn’t matter what that something is: you’ve abstracted out the ideas of “a hundred plus a hundred fifty”. In algebra we start to think about whether we can add together numbers without knowing what one or both of the numbers are — “x plus y” — and later still we look at adding together things that aren’t necessarily numbers.

And back to Saturday Morning Breakfast Cereal (August 13), which has a physicist type building a model of his “lack of dates” based on random walks, which, his colleague objects, “only works if we assume you’re an ideal gas molecule”. But models are often built on assumptions that might, taken literally, be nonsensical, like imagining the universe to have exactly three elements in it, supposing that people never act against their maximal long-term economic gain, or — to summon a traditional mathematics/physics joke — assuming a spherical cow. The point of a model is to capture some interesting behavior, and avoid the complicating factors that can’t be dealt with precisely or which don’t relate to the behavior being studied. Choosing how to simplify is the skill and art that earns mathematicians the big money.

And then for August 16, Saturday Morning Breakfast Cereal does a binary numbers joke. I confess my skepticism that there are any good alternate-base-number jokes, but you might like them.

Combining Matrices And Model Universes


I would like to resume talking about matrices and really old universes and the way nucleosynthesis in these model universes causes atoms to keep settling down to a peculiar but unchanging distribution.

I’d already described how a matrix offers a nice way to organize elements, and in ways that encode information about the context of the elements by where they’re placed. That’s useful and saves some writing, certainly, although by itself it’s not that interesting. Matrices start to get really powerful when, first, the elements being stored are things on which you can do something like arithmetic with pairs of them. Here I mostly just mean that you can add together two elements, or multiply them, and get back something meaningful.

This typically means that the matrix is made up of a grid of numbers, although that isn’t actually required, just, really common if we’re trying to do mathematics.

Then you get the ability to add together and multiply together the matrices themselves, turning pairs of matrices into some new matrix, and building something that works a lot like arithmetic on these matrices.

Adding one matrix to another is done in almost the obvious way: add the element in the first row, first column of the first matrix to the element in the first row, first column of the second matrix; that’s the first row, first column of your new matrix. Then add the element in the first row, second column of the first matrix to the element in the first row, second column of the second matrix; that’s the first row, second column of the new matrix. Add the element in the second row, first column of the first matrix to the element in the second row, first column of the second matrix, and put that in the second row, first column of the new matrix. And so on.

This means you can only add together two matrices that are the same size — the same number of rows and of columns — but that doesn’t seem unreasonable.

You can also do something called scalar multiplication of a matrix, in which you multiply every element in the matrix by the same number. A scalar is just a number that isn’t part of a matrix. This multiplication is useful, not least because it lets us talk about how to subtract one matrix from another: to find the difference of the first matrix and the second, scalar-multiply the second matrix by -1, and then add the first to that product. But you can do scalar multiplication by any number, by two or minus pi or by zero if you feel like it.
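If it helps to see those rules as code, here’s a minimal sketch using plain lists of lists as matrices; it’s meant to mirror the element-by-element description above, not to be an efficient implementation:

```python
# A small sketch of matrix addition, scalar multiplication, and
# subtraction, written to follow the element-by-element rules above.

def add(A, B):
    # element in row i, column j of the sum = A[i][j] + B[i][j]
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def scalar_multiply(r, A):
    # multiply every element of A by the same number r
    return [[r * A[i][j] for j in range(len(A[0]))] for i in range(len(A))]

def subtract(A, B):
    # A - B, done exactly as in the text: A + (-1)B
    return add(A, scalar_multiply(-1, B))

A = [[1, 2], [3, 4]]
B = [[10, 20], [30, 40]]
print(add(A, B))       # [[11, 22], [33, 44]]
print(subtract(A, B))  # [[-9, -18], [-27, -36]]
```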

I should say something about notation. When we want to write out these kinds of operations efficiently, of course, we turn to symbols to represent the matrices. We can, in principle, use any symbols, but by convention a matrix usually gets represented with a capital letter, A or B or M or P or the like. So to add matrix A to matrix B, with the result being matrix C, we can write out the equation “A + B = C”, which is about as simple as we could hope to see. Scalars are normally written in lowercase letters, often Greek letters, if we don’t know what the number is, so that the scalar multiplication of the number r and the matrix A would be the product “rA”, and we could write the difference between matrix A and matrix B as “A + (-1)B” or “A – B”.

Matrix multiplication, now, that is done by a process that sounds like doubletalk, and it takes a while of practice to do it right. But there are good reasons for doing it that way and we’ll get to one of those reasons by the end of this essay.

To multiply matrix A and matrix B together, we do multiply various pairs of elements from both matrix A and matrix B. The surprising thing is that we also add together sets of these products, per this rule.

Take the element in the first row, first column of A, and multiply it by the element in the first row, first column of B. Add to that the product of the element in the first row, second column of A and the second row, first column of B. Add to that total the product of the element in the first row, third column of A and the third row, first column of B, and so on. When you’ve run out of columns of A and rows of B, this total is the first row, first column of the product of the matrices A and B.

Plenty of work. But we have more to do. Take the product of the element in the first row, first column of A and the element in the first row, second column of B. Add to that the product of the element in the first row, second column of A and the element in the second row, second column of B. Add to that the product of the element in the first row, third column of A and the element in the third row, second column of B. And keep adding those up until you’re out of columns of A and rows of B. This total is the first row, second column of the product of matrices A and B.

This does mean that you can multiply matrices of different sizes, provided the first one has as many columns as the second has rows. And the product may be a completely different size from the first or second matrices. It also means it might be possible to multiply matrices in one order but not the other: if matrix A has four rows and three columns, and matrix B has three rows and two columns, then you can multiply A by B, but not B by A.
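Here is the same doubletalk-sounding rule written as a sketch in code — again just mirroring the description above, with a small example whose sizes show the columns-of-A-must-match-rows-of-B requirement:

```python
# Each entry of the product is a sum of products, running across a row
# of A and down a column of B.

def multiply(A, B):
    rows, inner, cols = len(A), len(A[0]), len(B[0])
    assert inner == len(B), "columns of A must match rows of B"
    C = [[0] * cols for _ in range(rows)]
    for i in range(rows):          # row of A (and of the product)
        for j in range(cols):      # column of B (and of the product)
            for k in range(inner): # run across A's row and down B's column
                C[i][j] += A[i][k] * B[k][j]
    return C

A = [[1, 2, 3],
     [4, 5, 6]]        # 2 rows, 3 columns
B = [[7, 8],
     [9, 10],
     [11, 12]]         # 3 rows, 2 columns
print(multiply(A, B))  # [[58, 64], [139, 154]] -- a 2-by-2 result
```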

My recollection on learning this process was that this was crazy, and the workload ridiculous, and I imagine people who get this in Algebra II, and don’t go on to using mathematics later on, remember the process as nothing more than an unpleasant blur of doing a lot of multiplying and addition for some reason or other.

So here is one of the reasons why we do it this way. Let me define two matrices:

A = \left(\begin{array}{c c c}  3/4 & 0 & 2/5 \\  1/4 & 3/5 & 2/5 \\  0 & 2/5 & 1/5  \end{array}\right)

B = \left(\begin{array}{c} 100 \\ 0 \\ 0 \end{array}\right)

Then matrix A times B is

AB = \left(\begin{array}{c}  3/4 \cdot 100 + 0 \cdot 0 + 2/5 \cdot 0 \\  1/4 \cdot 100 + 3/5 \cdot 0 + 2/5 \cdot 0 \\  0 \cdot 100 + 2/5 \cdot 0 + 1/5 \cdot 0  \end{array}\right) = \left(\begin{array}{c}  75 \\  25 \\  0  \end{array}\right)

You’ve seen those numbers before, of course: the matrix A contains the probabilities I put in my first model universe to describe the chances that over the course of a billion years a hydrogen atom would stay hydrogen, or become iron, or become uranium, and so on. The matrix B contains the original distribution of atoms in the toy universe, 100 percent hydrogen and nothing of anything else. And the product of A and B was exactly the distribution after that first billion years: 75 percent hydrogen, 25 percent iron, and no uranium.

If we multiply the matrix A by that product again — well, you should expect we’re going to get the distribution of elements after two billion years, that is, 56.25 percent hydrogen, 33.75 percent iron, 10 percent uranium, but let me write it out anyway to show:

\left(\begin{array}{c c c}  3/4 & 0 & 2/5 \\  1/4 & 3/5 & 2/5 \\  0 & 2/5 & 1/5  \end{array}\right)\left(\begin{array}{c}  75 \\ 25 \\ 0  \end{array}\right) = \left(\begin{array}{c}  3/4 \cdot 75 + 0 \cdot 25 + 2/5 \cdot 0 \\  1/4 \cdot 75 + 3/5 \cdot 25 + 2/5 \cdot 0 \\  0 \cdot 75 + 2/5 \cdot 25 + 1/5 \cdot 0  \end{array}\right) = \left(\begin{array}{c}  56.25 \\  33.75 \\  10  \end{array}\right)

And if you don’t know just what would happen if we multiplied A by that product, you aren’t paying attention.

This also gives a reason why matrix multiplication is defined this way. The operation captures neatly the operation of making a new thing — in the toy universe case, hydrogen or iron or uranium — out of some combination of fractions of an old thing — again, the former distribution of hydrogen and iron and uranium.

Or here’s another reason. Since this matrix A has three rows and three columns, you can multiply it by itself and get a matrix of three rows and three columns out of it. That matrix — which we can write as A^2 — then describes how two billion years of nucleosynthesis would change the distribution of elements in the toy universe. A times A times A would give three billion years of nucleosynthesis; A^10 ten billion years. The actual calculating of the numbers in these matrices may be tedious, but it describes a complicated operation very efficiently, which we always want to do.
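For anyone who wants to watch that settling-down happen without doing the arithmetic by hand, here is a minimal sketch; it assumes the numpy library is available, and uses the same matrix A and starting distribution as above:

```python
# The toy-universe transition matrix from the text, applied over and over
# to the starting distribution.  numpy is just a convenience here; the
# arithmetic is the same sums-of-products described above.
import numpy as np

A = np.array([[3/4, 0,   2/5],
              [1/4, 3/5, 2/5],
              [0,   2/5, 1/5]])
x = np.array([100.0, 0.0, 0.0])   # percent hydrogen, iron, uranium

for billion_years in range(1, 11):
    x = A @ x
    print(billion_years, np.round(x, 2))

# After one step: [75, 25, 0]; after two: [56.25, 33.75, 10]; the later
# steps creep toward the unchanging distribution the earlier posts
# described.  Ten billion years is also a single multiplication by A^10:
print(np.round(np.linalg.matrix_power(A, 10) @ np.array([100.0, 0, 0]), 2))
```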

I should mention another bit of notation. We usually use capital letters to represent matrices; but, a matrix that’s just got one column is also called a vector. That’s often written with a lowercase letter, with a little arrow above the letter, as in \vec{x} , or in bold typeface, as in x. (The arrows are easier to put in writing, the bold easier when you were typing on typewriters.) But if you’re doing a lot of writing this out, and know that (say) x isn’t being used for anything but vectors, then even that arrow or boldface will be forgotten. Then we’d write the product of matrix A and vector x as just Ax.  (There are also cases where you put a little caret over the letter; that’s to denote that it’s a vector that’s one unit of length long.)

When you start writing vectors without an arrow or boldface you start to run the risk of confusing which symbols mean scalars and which mean vectors. That’s one of the reasons that Greek letters are popular for scalars. It’s also common to put scalars to the left and vectors to the right. So if one saw “rMx”, it would be expected that r is a scalar, M a matrix, and x a vector, and if they’re not then this should be explained in text nearby, preferably before the equations. (And of course if it’s work you’re doing, you should know going in what you mean the letters to represent.)

The Geometry of Thermodynamics (Part 1)


Joseph Nebus:

I should mention that Peter Mander’s Carnot Cycle blog has a fine entry, “The Geometry of Thermodynamics (Part I)” which admittedly opens with a diagram that looks like the sort of thing you create when you want to present a horrifying science diagram. That’s a bit of flavor.

Mander writes about part of what made J Willard Gibbs probably the greatest theoretical physicist that the United States has yet produced: Gibbs put much of thermodynamics into a logically neat system, the kind we still basically use today, and all the better saw how to represent it and understand it as a matter of surface geometries. This is an abstract kind of surface — looking at the curve traced out by, say, mapping the energy of a gas against its volume, or its temperature versus its entropy — but if you can accept the idea that we can draw curves representing these quantities then you get to use your understanding of how solid objects look and feel (and Gibbs’s surfaces even got made into solid objects — James Clerk Maxwell, of Maxwell’s Equations fame, sculpted some).

This is a reblogging of only part one, although as Mander’s on summer holiday you haven’t missed part two.

Originally posted on carnotcycle:


Volume One of the Scientific Papers of J. Willard Gibbs, published posthumously in 1906, is devoted to Thermodynamics. Chief among its content is the hugely long and desperately difficult “On the equilibrium of heterogeneous substances (1876, 1878)”, with which Gibbs single-handedly laid the theoretical foundations of chemical thermodynamics.

In contrast to James Clerk Maxwell’s textbook Theory of Heat (1871), which uses no calculus at all and hardly any algebra, preferring geometry as the means of demonstrating relationships between quantities, Gibbs’ magnum opus is stuffed with differential equations. Turning the pages of this calculus-laden work, one could easily be drawn to the conclusion that the writer was not a visual thinker.

But in Gibbs’ case, this is far from the truth.

The first two papers on thermodynamics that Gibbs published, in 1873, were in fact visually-led. Paper I deals with indicator diagrams and their comparative properties, while Paper II

View original 1,491 more words

In the Overlap between Logic, Fun, and Information


Joseph Nebus:

Since I do need to make up for my former ignorance of John Venn’s diagrams and how to use them, let me join in what looks early on like a massive Internet swarm of mentions of Venn. The Daily Nous, a philosophy-news blog, was my first hint that anything interesting was going on (as my love is a philosopher and is much more in tune with the profession than I am with mathematics), and I appreciate the way they describe Venn’s interesting properties. (Also, for me at least, that page recommends I read Dungeons and Dragons and Derrida, itself pointing to an installment of philosophy-based web comic Existentialist Comics, so you get a sense of how things go over there.)

And then a friend retweeted the above cartoon (available as T-shirt or hoodie), which does indeed parse as a Venn diagram if you take the left circle as representing “things with flat tails playing guitar-like instruments” and the right circle as representing “things with duck bills playing keyboard-like instruments”. Remember — my love is “very picky” about Venn diagram jokes — the intersection in a Venn diagram is not a blend of the things in the two contributing circles, but is rather, properly, something which belongs to both the groups of things.

The 4th of August is also William Rowan Hamilton’s birthday. He’s known for the discovery of quaternions, which are kind of to complex-valued numbers what complex-valued numbers are to the reals, but they’re harder to make a fun Google Doodle about. Quaternions are a pretty good way of representing rotations in a three-dimensional space, but that just looks like rotating stuff on the computer screen.
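As a small, self-contained illustration of that rotation business — my own toy example, not anything from the Doodle or tied to Hamilton’s original notation — a quaternion can turn a point about an axis like so:

```python
import math

# Rotating a point in three dimensions with a quaternion.  A rotation by
# angle t about a unit axis (ux, uy, uz) is the quaternion
# (cos(t/2), ux*sin(t/2), uy*sin(t/2), uz*sin(t/2)).

def qmul(a, b):
    # Hamilton's quaternion product, components ordered (w, x, y, z).
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(point, axis, angle):
    s = math.sin(angle / 2)
    q = (math.cos(angle / 2), axis[0]*s, axis[1]*s, axis[2]*s)
    q_conj = (q[0], -q[1], -q[2], -q[3])
    p = (0.0, *point)                      # embed the point as a quaternion
    return qmul(qmul(q, p), q_conj)[1:]    # q p q*, then drop the scalar part

# Rotating the x-axis a quarter turn about the z-axis lands on the y-axis:
print([round(c, 6) for c in rotate((1, 0, 0), (0, 0, 1), math.pi / 2)])
# approximately [0, 1, 0]
```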

Originally posted on Daily Nous:

John Venn, an English philosopher who spent much of his career at Cambridge, died in 1923, but if he were alive today he would totally be dead, as it is his 180th birthday. Venn was named after the Venn diagram, owing to the fact that as a child he was terrible at math but good at drawing circles, and so was not held back in 5th grade. In celebration of this philosopher’s birthday Google has put up a fun, interactive doodle — just for today. Check it out.

Note: all comments on this post must be in Venn Diagram form.

View original

July 2014 in Mathematics Blogging


We’ve finally reached the kalends of August so I can look back at the mathematics blog statistics for June and see how they changed in July. Mostly it’s a chance to name countries that had anybody come read entries here, which is strangely popular. I don’t know why.

Since I’d had 16,174 page views total at the start of July I figured I wasn’t going to cross the symbolically totally important 17,000 by the start of August and what do you know but I was right, I didn’t. I did have a satisfying 589 page views (for a total of 16,763), which doesn’t quite reach May’s heights but is a step up from June’s 492 views. The number of unique visitors as WordPress figures it was 231, up from June’s 194. That’s not an unusually large or small number of unique visitors for this year, and it keeps the views per visitor just about unchanged, 2.55 as opposed to June’s 2.54.

July’s most popular postings were mostly mathematics comics ones — well, they have the most reader-friendly hook after all, and often include a comic or two — but I’m gratified by what proved to be the month’s most popular since I like it too:

  1. To Build A Universe, and my simple toy version of an arbitrarily old universe. This builds on In A Really Old Universe and on What’s Going On In The Old Universe, and is followed by Lewis Carroll And My Playing With Universes, also some popular posts.
  2. Reading the Comics, July 3, 2014: Wulff and Morgenthaler Edition, I suppose because WuMo is a really popular comic strip these days.
  3. Reading the Comics, July 28, 2014: Homework in an Amusement Park Edition, I suppose because everybody likes amusement parks these days.
  4. Reading the Comics, July 24, 2014: Math Is Just Hard Stuff, Right? Edition, I suppose because people like thinking mathematics is hard these days.
  5. Some Things About Joseph Nebus, because I guess I had a sudden onset of being interesting?
  6. Reading the Comics, July 18, 2014: Summer Doldrums Edition, because summer gets to us all these days.

The countries sending me the most readers this month were the United States (369 views), the United Kingdom (43 views), and the Philippines (24 views). Australia, Austria, Canada, and Singapore turned up well too. Sending just a single viewer this month were Greece, Hong Kong, Italy, Japan, Norway, Puerto Rico, and Spain; Hong Kong and Japan were the only ones who did that in June, and for that matter May also. My Switzerland reader from June had a friend this past month.

Among the search terms that brought people to me this month:

  • comics strips for differential calculus
  • nebus on starwars
  • 82 % what do i need on my finalti get a c
  • what 2 monsters on monster legends make dark nebus
    (this seems like an ominous search query somehow)
  • the 80s cartoon character who sees mathematics equations
  • starwars nebus
    (suddenly this Star Wars/Me connection seems ominous)
  • origin is the gateway to your entire gaming universe
    (I can’t argue with that)

Reading the Comics, July 28, 2014: Homework in an Amusement Park Edition


I don’t think my standards for mathematics content in comic strips are seriously lowering, but the strips do seem to be coming pretty often for the summer break. I admit I’m including one of these strips just because it lets me talk about something I saw at an amusement park, though. I have my weaknesses.

Harley Schwadron’s 9 to 5 (July 25) builds its joke around the ambiguity of saying a salary is six (or some other number) of figures, if you don’t specify what side of the decimal they’re on. That’s an ordinary enough gag, although the size of a number can itself be an interesting thing to know. The number of digits it takes to write a number down corresponds, roughly, with the logarithm of a number, and in the olden days a lot of computations depended on logarithms: multiplying two numbers is equivalent to adding their logarithms; dividing two numbers, subtracting their logarithms. And addition and subtraction are normally easier than multiplication and division. Similarly, raising one number to a power becomes multiplying one number by the logarithm of another, and multiplication is easier than exponentiation. So counting the number of digits in a number might be something anyway.
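That digit-count-versus-logarithm correspondence is easy to check for whole numbers; a quick sketch:

```python
import math

# The number of decimal digits of a positive whole number is one more
# than the whole part of its base-10 logarithm.
for n in (7, 42, 2038, 123456789):
    print(n, len(str(n)), math.floor(math.log10(n)) + 1)
```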

Steve Breen and Mike Thompson’s Grand Avenue (July 25) has the kids mention something as being “like going to an amusement park to do math homework”, which gives me a chance to share this incident. Last year my love and I were in the Cedar Point amusement park (in Sandusky, Ohio), and went to the coffee shop. We saw one guy sitting at a counter, with his laptop and a bunch of papers sprawled out, looking pretty much like we do when we’re grading papers, and we thought initially that it was so very sad that someone would be so busy at work that (we presumed) he couldn’t even really participate in the family expedition to the amusement park.

And then we remembered: not everybody lives a couple hours away from an amusement park. If we lived, say, fifteen minutes from a park we had season passes to, we’d certainly at least sometimes take our grading work to the park, so we could get it done in an environment we liked and reward ourselves for getting done with a couple roller coasters and maybe the Cedar Downs carousel (which is worth an entry around these parts anyway). To grade, anyway; I’d never have the courage to bring my laptop to the coffee shop. So I guess all I’m saying is, I have a context in which yes, I could imagine going to an amusement park to grade math homework at least.

Wulff and Morgenthaler Truth Facts (July 25) makes a Venn diagram joke in service of asserting that only people who don’t understand statistics would play the lottery. This is an understandable attitude of Wulff and Morgenthaler, and of many, many people who make the same claim. The expectation value — each amount you might win, times the probability of winning it, summed up, minus the cost of the ticket — is negative for all but the most extremely oversized lottery payouts, and the most extremely oversized lottery payouts still give you odds of winning so tiny that you really aren’t hurting your chances by not buying a ticket. However, the smugness behind the attitude bothers me — I’m generally bothered by smugness — and jokes like this one contain the assumption that the only sensible way to live is a ruthless profit-and-loss calculation to life that even Jeremy Bentham might say is a bit much. For the typical person, buying a lottery ticket is a bit of a lark, a couple dollars of disposable income spent because, what the heck, it’s about what you’d spend on one and a third sodas and you aren’t that thirsty. Lottery pools with coworkers or friends make it a small but fun social activity, too. That something is a net loss of money does not mean it is necessarily foolish. (This isn’t to say it’s wise, either, but I’d generally like a little more sympathy for people’s minor bits of recreational foolishness.)
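To show the expectation-value arithmetic with actual numbers — all of them invented for the illustration, since real lotteries have more prize tiers, shared jackpots, and taxes — here’s a sketch:

```python
# Purely made-up figures, just to show the expectation-value arithmetic.
ticket_price = 2.00
prizes = {                 # prize amount : probability of winning it
    100_000_000: 1 / 175_000_000,
    100:         1 / 10_000,
    4:           1 / 50,
}

expected_winnings = sum(amount * p for amount, p in prizes.items())
expected_value = expected_winnings - ticket_price
print(round(expected_value, 3))   # about -1.34: a loss on average, as the text says
```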

Marc Anderson’s Andertoons (July 27) does a spot of wordplay about the meaning of “aftermath”. I can’t think of much to say about this, so let me just mention that Florian Cajori’s A History of Mathematical Notations reports (section 201) that the + symbol for addition appears to trace from writing “et”, meaning and, a good deal, with the letters merging together and simplifying from that. This seems plausible enough on its face, but it does cause me to reflect that the & symbol also is credited as a symbol born from writing “et” a lot. (Here, picture writing Et and letting the middle and lower horizontal strokes of the E merge with the cross bar and the lowest point of the t.)

Berkeley Breathed’s Bloom County (July 27, rerun from, I believe, July of 1988) is one of the earliest appearances I can remember of the Grand Unification appearing in popular culture, certainly in comic strips. Unifications have a long and grand history in mathematics and physics in explaining things which look very different by the same principles, with the first to really draw attention probably being Descartes showing that algebra and geometry could be understood as a single thing, and problems difficult in one field could be easy in the other. In physics, the most thrilling unification was probably the explaining of electricity, magnetism, and light as the same thing in the 19th century; being able to explain many varied phenomena with some simple principles is just so compelling. General relativity shows that we can interpret accelerations and gravitation as the same thing; and in the late 20th century, physicists found that it’s possible to use a single framework to explain both electromagnetism and the forces that hold subatomic particles together and that break them apart.

It’s not yet known how to explain gravity and quantum mechanics in the same, coherent, frame. It’s generally assumed they can be reconciled, although I suppose there’s no logical reason they have to be. Finding a unification — or a proof they can’t be unified — would certainly be one of the great moments of mathematical physics.

The idea of the grand unification theory as an explanation for everything is … well, fair enough. A grand unification theory should be able to explain what particles in the universe exist, and what forces they use to interact, and from there it would seem like the rest of reality is details. Perhaps so, but it’s a long way to go from a simple starting point to explaining something as complicated as a penguin. I guess what I’m saying is I doubt Oliver would notice the non-existence of Opus in the first couple pages of his work.

Thom Bluemel’s Birdbrains (July 28) takes us back to the origin of numbers. It also makes me realize I don’t know what’s the first number that we know of people discovering. What I mean is, it seems likely that humans are just able to recognize a handful of numbers, like one and two and maybe up to six or so, based on how babies and animals can recognize something funny if the counts of small numbers of things don’t make sense. And larger numbers were certainly known to antiquity; probably the fact that numbers keep going on forever was known to antiquity. And some special numbers with interesting or difficult properties, like pi or the square root of two, were known so long ago we can’t say who discovered them. But then there are numbers like the Euler-Mascheroni constant, which are known and recognized as important things, and we can say reasonably well who discovered them. So what is the first number with a known discoverer?

Lewis Carroll and my Playing With Universes


I wanted to explain what’s going on that my little toy universes with three kinds of elements changing to one another keep settling down to steady and unchanging distributions of stuff. I can’t figure a way to do that other than to introduce some actual mathematics notation, and I’m aware that people often find that sort of thing off-putting, or terrifying, or at the very least unnerving.

There’s fair reason to: the entire point of notation is to write down a lot of information in a way that’s compact or easy to manipulate. Using it at all assumes that the writer, and the reader, are familiar with enough of the background that they don’t have to have it explained at each reference. To someone who isn’t familiar with the topic, then, the notation looks like symbols written down without context and without explanation. It’s much like wandering into an Internet forum where all the local acronyms are unfamiliar, the in-jokes are heavy on the ground, and for some reason nobody actually spells out Dave Barry’s name in full.

Let me start by looking at the descriptions of my toy universe: it’s made up of a certain amount of hydrogen, a certain amount of iron, and a certain amount of uranium. Since I’m not trying to describe, like, where these elements are or how they assemble into toy stars or anything like that, I can describe everything that I find interesting about this universe with three numbers. I had written those out as “40% hydrogen, 35% iron, 25% uranium”, for example, or “10% hydrogen, 60% iron, 30% uranium”, or whatever the combination happens to be. If I write the elements in the same order each time, though, I don’t really need to add “hydrogen” and “iron” and “uranium” after the numbers, and if I’m always looking at percentages I don’t even need to add the percent symbol. I can just list the numbers and let the “percent hydrogen” or “percent iron” or “percent uranium” be implicit: “40, 35, 25”, for one universe’s distribution, or “10, 60, 30” for another.

Letting the position of where a number is written carry information is a neat and easy way to save effort, and when you notice what’s happening you realize it’s done all the time: it’s how writing the date as “7/27/14” makes any sense, or how a sports scoreboard might compactly describe the course of the game:

0 1 0   1 2 0   0 0 4   8 13 1
2 0 0   4 0 0   0 0 1   7 15 0

To use the notation you need to understand how the position encodes information. “7/27/14” doesn’t make sense unless you know the first number is the month, the second the day within the month, and the third the year in the current century; and the fact that there’s an equally strong convention putting the day within the month first and the month second presents hazards when the information is ambiguous. Reading the box score requires knowing the top row reflects the performance of the visitor’s team, the bottom row the home team, and the first nine columns count the runs by each team in each inning, while the last three columns are the total count of runs, hits, and errors by that row’s team.

When you put together the numbers describing something into a rectangular grid, that’s termed a matrix of numbers. The box score for that imaginary baseball game is obviously one, but it’s also a matrix if I just write the numbers describing my toy universe in a row, or a column:

40
35
25

or

10
60
30

If a matrix has just the one column, it’s often called a vector. If a matrix has the same number of rows as it has columns, it’s called a square matrix. Matrices and vectors are also usually written with either straight brackets or curled parentheses around them, left and right, but that’s annoying to do in HTML so please just pretend.

The matrix as mathematicians know it today got put into a logically rigorous form around 1850 largely by the work of James Joseph Sylvester and Arthur Cayley, leading British mathematicians who also spent time teaching in the United States. Both are fascinating people, Sylvester for his love of poetry and language and for an alleged incident while briefly teaching at the University of Virginia which the MacTutor archive of mathematician biographies, citing L S Feuer, describes so: “A student who had been reading a newspaper in one of Sylvester’s lectures insulted him and Sylvester struck him with a sword stick. The student collapsed in shock and Sylvester believed (wrongly) that he had killed him. He fled to New York where one of his elder brothers was living.” MacTutor goes on to give reasons why this story may be somewhat distorted, although it does suggest one solution to the problem of students watching their phones in class.

Cayley, meanwhile, competes with Leonhard Euler for prolific range in a mathematician. MacTutor cites him as having at least nine hundred published papers, covering pretty much all of modern mathematics, including work that would underlie quantum mechanics and non-Euclidean geometry. He wrote about 250 papers in the fourteen years he was working as a lawyer, which would by itself have made him a prolific mathematician. If you need to bluff your way through a mathematical conversation, saying “Cayley” and following it with any random noun will probably allow you to pass.

MathWorld mentions, to my delight, that Lewis Carroll, in his secret guise as Charles Dodgson, came into the world of matrices in 1867 with an objection to the very word. In writing about them, Dodgson said, “I am aware that the word ‘Matrix’ is already in use to express the very meaning for which I use the word ‘Block’; but surely the former word means rather the mould, or form, into which algebraical quantities may be introduced, than an actual assemblage of such quantities”. He’s got a fair point, really, but there wasn’t much to be done in 1867 to change the word, and it’s only gotten more entrenched since then.