# My Math Blog Statistics, August 2014

So, August 2014: it’s been a month that brought some interesting threads into my writing here. It’s also had slightly longer gaps between posts than I’d like, because I just haven’t had the time to do as much writing as I hoped. But that leaves the question of how this affected my readership: are people still sticking around, and do they like what they see?

The number of unique readers around here, according to WordPress, rose slightly, from 231 in July to 255 in August. This doesn’t compare favorably to numbers like the 315 visitors in May, but still, it’s an increase. The total number of page views dropped from 589 in July to 561 in August, and don’t think I wasn’t tempted to hit refresh a bunch of times over the last few days of the month. Anyway, views per visitor dropped from 2.55 to 2.20, which seems to be closer to my long-term average. And at some point in the month — I failed to track when — I reached my 17,000th reader, and got up to 17,323 by the end of the month. If I’m really interesting this month I could hit 18,000 by the end of September.

The countries sending me the most readers were, in first place, the ever-unsurprising United States (345). Second place was Spain (36) which did take me by surprise, and Puerto Rico was third (30). The United Kingdom, Austria, and Canada came up next so at least that’s all familiar enough, and India sent me a nice round dozen readers. I got a single reader from each of Argentina, Belgium, Brazil, Finland, Germany, Hong Kong, Indonesia, Latvia, Mexico, Romania, Serbia, South Korea, Sweden, Thailand, and Venezuela. The only country that also sent me a single reader in July was Hong Kong (which also sent a lone reader in June and in May), and going back over last month’s post revealed that Spain and Puerto Rico were single-reader countries in July. I don’t know what I did to become more interesting there in August but I’ll try to keep it going.

The most popular articles in August were:

I fear I lack any good Search Term Poetry this month. Actually, the biggest search terms have been pretty rote ones, e.g.:

• trapezoid
• barney and clyde carl friedrich comic
• moment of inertia of cube around the longest diagonal
• where do negative numbers come from
• comic strip math cube of binomials

Actually, Gauss comic strips were searched for a lot. I’m sorry I don’t have more of them for folks, but have you ever tried to draw Gauss? I thought not. At least I had something relevant for the moment of inertia question even if I didn’t answer it completely.

# Reading the Comics, August 29, 2014: Recurring Jokes Edition

Well, I did say we were getting to the end of summer. It’s taken only a couple days to get a fresh batch of enough mathematics-themed comics to include here, although the majority of them are about mathematics in ways that we’ve seen before, sometimes many times. I suppose that’s fair; it’s hard to keep thinking of wholly original mathematics jokes, after all. When you’ve had one killer gag about “537”, it’s tough to move on to “539” and have it still feel fresh.

Tom Toles’s Randolph Itch, 2 am (August 27, rerun) presents Randolph suffering the nightmare of contracting a case of entropy. Entropy might be the 19th-century mathematical concept that’s most achieved popular recognition: everyone knows it as some kind of measure of how disorganized things are, and that it’s going to ever increase, and if pressed there’s maybe something about milk being stirred into coffee that’s linked with it. The mathematical definition of entropy is tied to the probability one will find whatever one is looking at in a given state. Work out the probability of finding a system in a particular state — having particles in these positions, with these speeds, maybe these bits of magnetism, whatever — and multiply that by the logarithm of that probability. Work out that product for all the possible ways the system could possibly be configured, however likely or however improbable, just so long as they’re not impossible states. Then add together all those products over all possible states, and flip the sign (each probability is at most 1, so its logarithm is negative or zero, and the convention makes entropy come out non-negative). (This is when you become grateful for learning calculus, since that makes it imaginable to do all these multiplications and additions.) That’s the entropy of the system. And it applies to things with stunning universality: it can be meaningfully measured for the stirring of milk into coffee, to heat flowing through an engine, to a body falling apart, to messages sent over the Internet, all the way to the outcomes of sports brackets. It isn’t just body parts falling off.
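For the curious, that recipe is short enough to compute directly. Here’s a minimal sketch in Python (the function name `entropy` is mine, and this is the dimensionless form, leaving out the physicist’s constant in front):

```python
import math

def entropy(probabilities):
    """Add up -p * log(p) over every possible state, skipping the
    impossible (p = 0) states, just as in the recipe above."""
    return sum(-p * math.log(p) for p in probabilities if p > 0)

# A perfectly ordered system, certain to be in one state, has zero entropy:
print(entropy([1.0]))       # 0.0
# A uniform spread over four states is as disordered as four states get:
print(entropy([0.25] * 4))  # log(4), about 1.386
```

The same function works whether the probabilities describe milk in coffee or messages on the Internet; that’s the universality the paragraph above is pointing at.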

Randy Glasbergen’s _The Better Half_ For the 28th of August, 2014.

Randy Glasbergen’s The Better Half (August 28) does the old joke about not giving up on algebra someday being useful. Do teachers in other subjects get this? “Don’t worry, someday your knowledge of the Panic of 1819 will be useful to you!” “Never fear, someday they’ll all look up to you for being able to diagram a sentence!” “Keep the faith: you will eventually need to tell someone who only speaks French that the notebook of your uncle is on the table of your aunt!”

Eric the Circle (August 28, by “Gilly” this time) sneaks into my pages again by bringing a famous mathematical symbol into things. I’d like to make a mention of the links between mathematics and music which go back at minimum as far as the Ancient Greeks and the observation that a lyre string twice as long produced the same note one octave lower, but lyres and strings don’t fit the reference Gilly was going for here. Too bad.

Zach Weinersmith’s Saturday Morning Breakfast Cereal (August 28) is another strip to use a “blackboard full of mathematical symbols” as visual shorthand for “incredibly smart stuff is going on”. The symbols look to me like they at least started out as being meaningful — they’re the kinds of symbols I expect in describing the curvature of space, and which you can find by opening up a book about general relativity — though I’m not sure they actually stay sensible. (It’s not the kind of mathematics I’ve really studied.) However, work in progress tends to be sloppy, the rough sketch of an idea which can hopefully be made sound.

Anthony Blades’s Bewley (August 29) has the characters stare into space pondering the notion that in the vastness of infinity there could be another of them out there. This is basically the same existentially troublesome question of the recurrence of the universe in enough time, something not actually prohibited by the second law of thermodynamics and the way entropy tends to increase with the passing of time, but we have already talked about that.

# Reading the Comics, August 25, 2014: Summer Must Be Ending Edition

I’m sorry to admit that I can’t think of a unifying theme for the most recent round of comic strips which mention mathematical topics, other than that this is one of those rare instances of nobody mentioning infinite numbers of typing monkeys. I have to guess Comic Strip Master Command sent around a notice that summer vacation (in the United States) will be ending soon, so cartoonists should start practicing their mathematics jokes.

Tom Toles’s Randolph Itch, 2 a.m. (August 22, rerun) presents what’s surely the lowest-probability outcome of a toss of a fair coin: its landing on the edge. (I remember this as also the gimmick starting a genial episode of The Twilight Zone.) It’s a nice reminder that you do have to consider all the things that might affect an experiment’s outcome before concluding what are likely and unlikely results.

It also inspires, in me, a side question: a single coin, obviously, has a tiny chance of landing on its edge. A roll of coins has a tiny chance of not landing on its edge. How thick does the roll have to be before the chance of landing on the curved side and the chance of landing on either flat face become equal? (Without working it out, my guess is it’s about when the roll of coins is as tall as it is across, but I wouldn’t be surprised if it were some slightly oddball thing like the roll has to be the square root of two times the diameter of the coins.)

Doug Savage’s Savage Chickens (August 22) presents an “advanced Sudoku”, in a puzzle that’s either trivially easy or utterly impossible: there are so few constraints on the numbers in the presented puzzle that it’s not hard to write in digits that will satisfy the results, but, if there’s one right answer, there’s not nearly enough information to tell which one it is. I do find the problem of satisfiability — giving just enough information to solve the puzzle, without allowing more than one solution to be valid — an interesting one. I imagine there’s a very similar problem at work in composing Ivasallay’s Find The Factors puzzles.

Phil Frank and Joe Troise’s The Elderberries (August 24, rerun) presents a “mind aerobics” puzzle in the classic mathematical form of drawing socks out of a drawer. Talking about pulling socks out of drawers suggests a probability puzzle, but the question actually takes it a different direction, into a different sort of logic, and asks about how many socks need to be taken out in order to be sure you have one of each color. The easiest way to approach this is, I believe, to use what’s termed the “pigeon hole principle”, which is one of those mathematical concepts so clear it’s hard to actually notice it. The principle is just that if you have fewer pigeon holes than you have pigeons, and put every pigeon in a pigeon hole, then there’s got to be at least one pigeon hole with more than one pigeon. (Wolfram’s MathWorld credits the statement to Peter Gustav Lejeune Dirichlet, a 19th century German mathematician with a long record of things named for him in number theory, probability, analysis, and differential equations.)
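The strip’s own numbers aren’t reproduced here, so as a hypothetical example: if you know how many socks of each color are in the drawer, the worst case is that you draw every sock except those of the scarcest color, and then one more draw settles it. A sketch (the function name is mine):

```python
def draws_to_guarantee_all_colors(counts):
    """How many draws guarantee one sock of every color, given the
    count of socks of each color in the drawer. Worst case: you draw
    everything except the scarcest color, then one more draw must
    finally produce that color."""
    return sum(counts) - min(counts) + 1

# Say the drawer holds 10 black, 8 blue, and 4 white socks:
print(draws_to_guarantee_all_colors([10, 8, 4]))  # 19
```

With only two colors this reduces to the familiar answer: the larger count plus one.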

Dave Whamond’s Reality Check (August 24) pulls out the old little pun about algebra and former romantic partners. You’ve probably seen this joke passed around your friends’ Twitter or Facebook feeds too.

Julie Larson’s The Dinette Set (August 25) presents some terrible people’s definition of calculus, as “useless math with letters instead of numbers”, which I have to gripe about because that seems like a more on-point definition of algebra. I’m actually sympathetic to the complaint that calculus is useless, at least if you don’t go into a field that requires it (although that’s rather a circular definition, isn’t it?), but I don’t hold to the idea that whether something is “useful” should determine whether it’s worth learning. My suspicion is that things you find interesting are worth learning, either because you’ll find uses for them, or just because you’ll be surrounding yourself with things you find interesting.

Shifting from numbers to letters, as are used in algebra and calculus, is a great advantage. It allows you to prove things that are true for many problems at once, rather than just the one you’re interested in at the moment. This generality may be too much work to bother with, at least for some problems, but it’s easy to see what’s attractive in solving a problem once and for all.

Mikael Wulff and Anders Morgenthaler’s WuMo (August 25) uses a couple of motifs, neither of which I’m sure is precisely mathematical, but they seem close enough for my needs. First there’s the motif of Albert Einstein as just being so spectacularly brilliant that he can form an argument in favor of anything, regardless of whether it’s right or wrong. Surely that derives from Einstein’s general reputation of utter brilliance, perhaps flavored by the point that he was able to show how common-sense intuitive ideas about things like “it’s possible to say whether this event happened before or after that event” go wrong. And then there’s the motif of a sophistic argument being so massive and impressive in its bulk that it’s easier to just give in to it rather than try to understand or refute it.

It’s fair of the strip to present Einstein as beginning with questions about how one perceives the universe, though: his relativity work in many ways depends on questions like “how can you tell whether time has passed?” and “how can you tell whether two things happened at the same time?” These are questions which straddle physics, mathematics, and philosophy, and trying to find answers which are logically coherent and testable produced much of the work that’s given him such lasting fame.

# Machines That Think About Logarithms

I confess that I picked up Edmund Callis Berkeley’s Giant Brains: Or Machines That Think, originally published 1949, from the library shelf as a source of cheap ironic giggles. After all, what is funnier than an attempt to explain to a popular audience that, wild as it may be to contemplate, electrically-driven machines could “remember” information and follow “programs” of instructions based on different conditions satisfied by that information? There’s a certain amount of that, though not as much as I imagined, and a good amount of descriptions of how the hardware of different partly or fully electrical computing machines of the 1940s worked.

But a good part, and the most interesting part, of the book is about algorithms, the ways to solve complicated problems without demanding too much computing power. This is fun to read because it showcases the ingenuity and creativity required to do useful work. The need for ingenuity will never leave us — we will always want to compute things that are a little beyond our ability — but to see how it’s done for a simple problem is instructive, if only to learn the kinds of tricks you can do to get the most out of your computing resources.

The example that most struck me and which I want to share is from the chapter on the IBM Automatic Sequence-Controlled Calculator, built at Harvard at a cost of “somewhere near 3 or 4 hundred thousand dollars, if we leave out some of the cost of research and development, which would have been done whether or not this particular machine had ever been built”. It started working in April 1944, and wasn’t officially retired until 1959. It could store 72 numbers, each with 23 decimal digits. Like most computers (then and now) it could do addition and subtraction very quickly, in the then-blazing speed of about a third of a second; it could do multiplication tolerably quickly, in about six seconds; and division, rather slowly, in about fifteen seconds.

The process I want to describe is the taking of logarithms, and why logarithms should be interesting to compute takes a little bit of justification, although it’s implicitly there just in how fast calculations get done. Logarithms let one replace the multiplication of numbers with their addition, for a considerable savings in time; better, they let you replace the division of numbers with subtraction. They further let you turn exponentiation and roots into multiplication and division, which is almost always faster to do. Many human senses seem to work on a logarithmic scale, as well: we can tell that one weight is twice as heavy as the other much more reliably than we can tell that one weight is four pounds heavier than the other, and that one light is twice as bright as another more reliably than that it is ten lumens brighter.
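Those four replacements — multiplication to addition, division to subtraction, powers to multiplication, roots to division — are easy to check numerically. A quick Python sketch, with the values of `a` and `b` chosen arbitrarily for illustration:

```python
import math

a, b = 123.4, 56.78

# Multiplication becomes addition of logarithms:
assert math.isclose(math.log(a * b), math.log(a) + math.log(b))
# Division becomes subtraction:
assert math.isclose(math.log(a / b), math.log(a) - math.log(b))
# Exponentiation becomes multiplication:
assert math.isclose(math.log(a ** 3), 3 * math.log(a))
# Roots become division:
assert math.isclose(math.log(a ** 0.5), math.log(a) / 2)
```

This is exactly the trade the Automatic Sequence-Controlled Calculator wanted to make: a fifteen-second division for a third-of-a-second subtraction, once the logarithms are in hand.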

What the logarithm of a number is depends on some other, fixed, quantity, known as the base. In principle any positive number will do as base; in practice, these days people mostly only care about base e (which is a little over 2.718), the “natural” logarithm, because it has some nice analytic properties. Back in the day, which includes when this book was written, we also cared about base 10, the “common” logarithm, because we mostly work in base ten. I have heard of people who use base 2, but haven’t seen them myself and must regard them as an urban legend. The other bases are mostly used by people who are writing homework problems for the part of the class dealing with logarithms. To some extent it doesn’t matter what base you use. If you work out the logarithm in one base, you can convert that to the logarithm in another base by a multiplication.
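That conversion between bases really is a single multiplication, by the fixed constant 1 over the logarithm of the new base. A quick check in Python (the test value 500 is arbitrary):

```python
import math

x = 500.0

# Convert a natural logarithm to a common (base-10) logarithm by
# one multiplication with a precomputed constant:
conversion = 1 / math.log(10)
log10_x = math.log(x) * conversion

assert math.isclose(log10_x, math.log10(x))
print(log10_x)  # about 2.69897
```

So a machine (or a table-maker) only ever needs to compute logarithms in one base; every other base comes nearly for free.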

The logarithm of some number in your base is the exponent you have to raise the base to to get your desired number. For example, the logarithm of 100, in base 10, is going to be 2 because $10^2$ is 100, and the logarithm of $e^{1/3}$ (a touch greater than 1.3956), in base e, is going to be 1/3. To dig deeper in my reserve of in-jokes, the logarithm of 2038, in base 10, is approximately 3.3092, because $10^{3.3092}$ is just about 2038. The logarithm of e, in base 10, is about 0.4343, and the logarithm of 10, in base e, is about 2.303. Your calculator will verify all that.

All that talk about “approximately” should have given you some hint of the trouble with logarithms. They’re only really easy to compute if you’re looking for whole powers of whatever your base is, and then only if your base is 10 or 2 or something else simple like that. If you’re clever and determined you can work out, say, that the logarithm of 2, base 10, has to be close to 0.3. It’s fun to do that, but it’ll involve such reasoning as “two to the tenth power is 1,024, which is very close to ten to the third power, which is 1,000, so therefore the logarithm of two to the tenth power must be about the same as the logarithm of ten to the third power”. That’s clever and fun, but it’s hardly systematic, and it doesn’t get you many digits of accuracy.
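The clever-but-unsystematic estimate above can be written out in two lines, which also shows how far it gets you — about two digits and no more:

```python
import math

# 2**10 = 1024 is very nearly 10**3 = 1000, so ten copies of
# log10(2) must be very nearly 3:
estimate = 3 / 10

print(estimate)         # 0.3
print(math.log10(2))    # 0.30102999..., so the trick is off by about 0.001
```

Getting the next several digits systematically is the problem the next installment promises to take up.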

So when I pick up this thread I hope to explain one way to produce as many decimal digits of a logarithm as you could want, without asking for too much from your poor Automatic Sequence-Controlled Calculator.

# Writing About E (Not By Me)

It’s tricky to write about $e$. That is, it’s not a difficult thing to write about, but it’s hard to find the audience for this number. It’s quite important, mathematically, but it hasn’t got an easy-to-understand definition like pi’s “the circumference of a circle divided by its diameter”. E’s most concise definition, I guess, is “the base of the natural logarithm”, which as an explanation to someone who hasn’t done much mathematics is only marginally more enlightening than slapping him with a slice of cold pizza. And it hasn’t got the sort of renown of something like the golden ratio which makes the number sound familiar and even welcoming.

Still, the Mean Green Math blog (“Explaining the whys of mathematics”) has been running a series of essays explaining $e$, by looking at different definitions of the number. The most recent of these has been the twelfth in the series, and they seem to be arranged in chronological order under the category of Algebra II topics, and under the tag of “E” essays, although I can’t promise how long it’ll be before you have to flip through so many “older” page links on the category and tag pages that it’s harder to find that way. If I see a master page collecting all the Definitions Of E essays into one guide I’ll post that.

# Reading the Comics, August 16, 2014: Saturday Morning Breakfast Cereal Edition

Zach Weinersmith’s Saturday Morning Breakfast Cereal is a long-running and well-regarded web comic that I haven’t paid much attention to because I don’t read many web comics. XKCD, Newshounds, and a couple others are about it. I’m not opposed to web comics, mind you, I just don’t get around to following them typically. But Saturday Morning Breakfast Cereal started running on Gocomics.com recently, and Gocomics makes it easy to start adding comics, and I did, and that’s served me well for the mathematical comics collections since it’s been a pretty dry spell. I bet it’s the summer vacation.

Saturday Morning Breakfast Cereal (July 30) seems like a reach for inclusion in mathematical comics since its caption is “Physicists make lousy firemen” and it talks about the action of a fire — and of the “living things” caught in the fire — as processes producing wobbling and increases in disorder. That’s an effort at describing a couple of ideas, the first that the temperature of a thing is connected to the speed at which the molecules making it up are moving, and the second that the famous entropy is a never-decreasing quantity. We get these notions from thermodynamics and particularly the attempt to understand physically important quantities like heat and temperature in terms of particles — which have mass and position and momentum — and their interactions. You could write an entire blog about entropy and probably someone does.

Randy Glasbergen’s Glasbergen Cartoons (August 2) uses the word-problem setup for a strip of “Dog Math” and tries to remind everyone teaching undergraduates the quotient rule that it really could be worse, considering.

Nate Fakes’s Break of Day (August 4) takes us into an anthropomorphized world that isn’t numerals for a change, to play on the idea that skill in arithmetic is evidence of particular intelligence.

George McManus’s _Bringing Up Father_, originally run the 12th of April, 1949.

George McManus’s Bringing Up Father (August 11, rerun from April 12, 1949) goes to the old motif of using money to explain addition problems. It’s not a bad strategy, of course: in a way, arithmetic is one of the first abstractions one does, in going from the idea that a hundred of something added to a hundred fifty of something will yield two hundred fifty of that thing, and it doesn’t matter what that something is: you’ve abstracted out the ideas of “a hundred plus a hundred fifty”. In algebra we start to think about whether we can add together numbers without knowing what one or both of the numbers are — “x plus y” — and later still we look at adding together things that aren’t necessarily numbers.

And back to Saturday Morning Breakfast Cereal (August 13), which has a physicist type building a model of his “lack of dates” based on random walks and, his colleague objects, “only works if we assume you’re an ideal gas molecule”. But models are often built on assumptions that might, taken literally, be nonsensical, like imagining the universe to have exactly three elements in it, supposing that people never act against their maximal long-term economic gain, or — to summon a traditional mathematics/physics joke — assuming a spherical cow. The point of a model is to capture some interesting behavior, and avoid the complicating factors that can’t be dealt with precisely or which don’t relate to the behavior being studied. Choosing how to simplify is the skill and art that earns mathematicians the big money.

And then for August 16, Saturday Morning Breakfast Cereal does a binary numbers joke. I confess my skepticism that there are any good alternate-base-number jokes, but you might like them.

# Combining Matrices And Model Universes

I would like to resume talking about matrices and really old universes and the way nucleosynthesis in these model universes causes atoms to keep settling down to a peculiar but unchanging distribution.

I’d already described how a matrix offers a nice way to organize elements, and in ways that encode information about the context of the elements by where they’re placed. That’s useful and saves some writing, certainly, although by itself it’s not that interesting. Matrices start to get really powerful when, first, the elements being stored are things on which you can do something like arithmetic. Here I mostly just mean that you can add together two elements, or multiply them, and get back something meaningful.

This typically means that the matrix is made up of a grid of numbers, although that isn’t actually required, just, really common if we’re trying to do mathematics.

Then you get the ability to add together and multiply together the matrices themselves, turning pairs of matrices into some new matrix, and building something that works a lot like arithmetic on these matrices.

Adding one matrix to another is done in almost the obvious way: add the element in the first row, first column of the first matrix to the element in the first row, first column of the second matrix; that’s the first row, first column of your new matrix. Then add the element in the first row, second column of the first matrix to the element in the first row, second column of the second matrix; that’s the first row, second column of the new matrix. Add the element in the second row, first column of the first matrix to the element in the second row, first column of the second matrix, and put that in the second row, first column of the new matrix. And so on.

This means you can only add together two matrices that are the same size — the same number of rows and of columns — but that doesn’t seem unreasonable.
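The addition rule is mechanical enough to write out in a few lines. A sketch in Python, representing a matrix as a list of rows (the function name is mine):

```python
def matrix_add(A, B):
    """Entry-by-entry addition; A and B must be the same size."""
    assert len(A) == len(B) and len(A[0]) == len(B[0])
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

print(matrix_add([[1, 2], [3, 4]], [[10, 20], [30, 40]]))
# [[11, 22], [33, 44]]
```

The assertion at the top enforces the same-size restriction the paragraph above describes.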

You can also do something called scalar multiplication of a matrix, in which you multiply every element in the matrix by the same number. A scalar is just a number that isn’t part of a matrix. This multiplication is useful, not least because it lets us talk about how to subtract one matrix from another: to find the difference of the first matrix and the second, scalar-multiply the second matrix by -1, and then add the first to that product. But you can do scalar multiplication by any number, by two or minus pi or by zero if you feel like it.
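Scalar multiplication, and subtraction built out of it exactly as described, can be sketched the same way (function names mine):

```python
def scalar_multiply(r, A):
    """Multiply every element of the matrix A by the number r."""
    return [[r * a for a in row] for row in A]

def matrix_subtract(A, B):
    """A minus B, done as A plus (-1) times B, as described above."""
    neg_B = scalar_multiply(-1, B)
    return [[A[i][j] + neg_B[i][j] for j in range(len(A[0]))]
            for i in range(len(A))]

print(scalar_multiply(2, [[1, 2], [3, 4]]))               # [[2, 4], [6, 8]]
print(matrix_subtract([[5, 7], [9, 11]], [[1, 2], [3, 4]]))  # [[4, 5], [6, 7]]
```

Scalar-multiplying by zero works too, and gives the all-zeroes matrix, which plays the role of zero in this arithmetic.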

I should say something about notation. When we want to write out these kinds of operations efficiently, of course, we turn to symbols to represent the matrices. We can, in principle, use any symbols, but by convention a matrix usually gets represented with a capital letter, A or B or M or P or the like. So to add matrix A to matrix B, with the result being matrix C, we can write out the equation “A + B = C”, which is about as simple as we could hope to see. Scalars are normally written in lowercase letters, often Greek letters, if we don’t know what the number is, so that the scalar multiplication of the number r and the matrix A would be the product “rA”, and we could write the difference between matrix A and matrix B as “A + (-1)B” or “A – B”.

Matrix multiplication, now, that is done by a process that sounds like doubletalk, and it takes a while of practice to do it right. But there are good reasons for doing it that way and we’ll get to one of those reasons by the end of this essay.

To multiply matrix A and matrix B together, we do multiply various pairs of elements from both matrix A and matrix B. The surprising thing is that we also add together sets of these products, per this rule.

Take the element in the first row, first column of A, and multiply it by the element in the first row, first column of B. Add to that the product of the element in the first row, second column of A and the second row, first column of B. Add to that total the product of the element in the first row, third column of A and the third row, first column of B, and so on. When you’ve run out of columns of A and rows of B, this total is the first row, first column of the product of the matrices A and B.

Plenty of work. But we have more to do. Take the product of the element in the first row, first column of A and the element in the first row, second column of B. Add to that the product of the element in the first row, second column of A and the element in the second row, second column of B. Add to that the product of the element in the first row, third column of A and the element in the third row, second column of B. And keep adding those up until you’re out of columns of A and rows of B. This total is the first row, second column of the product of matrices A and B.

This does mean that you can multiply matrices of different sizes, provided the first one has as many columns as the second has rows. And the product may be a completely different size from the first or second matrices. It also means it might be possible to multiply matrices in one order but not the other: if matrix A has four rows and three columns, and matrix B has three rows and two columns, then you can multiply A by B, but not B by A.
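The whole procedure, including the size requirement, fits in a few lines. A Python sketch (names mine), multiplying a 2-by-3 matrix by a 3-by-2 matrix to get a 2-by-2 result:

```python
def matrix_multiply(A, B):
    """Requires: A has as many columns as B has rows. The entry in
    row i, column j of the product is the sum, over k, of the row-i,
    column-k element of A times the row-k, column-j element of B."""
    assert len(A[0]) == len(B)
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

A = [[1, 2, 3],
     [4, 5, 6]]
B = [[1, 0],
     [0, 1],
     [1, 1]]
print(matrix_multiply(A, B))  # [[4, 5], [10, 11]]
```

Trying `matrix_multiply(B, A)` here would also work, since B has two columns and A has two rows, but it produces a 3-by-3 matrix instead: the order genuinely matters.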

My recollection on learning this process was that this was crazy, and the workload ridiculous, and I imagine people who get this in Algebra II, and don’t go on to using mathematics later on, remember the process as nothing more than an unpleasant blur of doing a lot of multiplying and addition for some reason or other.

So here is one of the reasons why we do it this way. Let me define two matrices:

$A = \left(\begin{array}{c c c} 3/4 & 0 & 2/5 \\ 1/4 & 3/5 & 2/5 \\ 0 & 2/5 & 1/5 \end{array}\right)$

$B = \left(\begin{array}{c} 100 \\ 0 \\ 0 \end{array}\right)$

Then matrix A times B is

$AB = \left(\begin{array}{c} 3/4 \cdot 100 + 0 \cdot 0 + 2/5 \cdot 0 \\ 1/4 \cdot 100 + 3/5 \cdot 0 + 2/5 \cdot 0 \\ 0 \cdot 100 + 2/5 \cdot 0 + 1/5 \cdot 0 \end{array}\right) = \left(\begin{array}{c} 75 \\ 25 \\ 0 \end{array}\right)$

You’ve seen those numbers before, of course: the matrix A contains the probabilities I put in my first model universe to describe the chances that over the course of a billion years a hydrogen atom would stay hydrogen, or become iron, or become uranium, and so on. The matrix B contains the original distribution of atoms in the toy universe, 100 percent hydrogen and nothing else. And the product of A and B was exactly the distribution after that first billion years: 75 percent hydrogen, 25 percent iron, no uranium.

If we multiply the matrix A by that product again — well, you should expect we’re going to get the distribution of elements after two billion years, that is, 56.25 percent hydrogen, 33.75 percent iron, 10 percent uranium, but let me write it out anyway to show:

$\left(\begin{array}{c c c} 3/4 & 0 & 2/5 \\ 1/4 & 3/5 & 2/5 \\ 0 & 2/5 & 1/5 \end{array}\right)\left(\begin{array}{c} 75 \\ 25 \\ 0 \end{array}\right) = \left(\begin{array}{c} 3/4 \cdot 75 + 0 \cdot 25 + 2/5 \cdot 0 \\ 1/4 \cdot 75 + 3/5 \cdot 25 + 2/5 \cdot 0 \\ 0 \cdot 75 + 2/5 \cdot 25 + 1/5 \cdot 0 \end{array}\right) = \left(\begin{array}{c} 56.25 \\ 33.75 \\ 10 \end{array}\right)$

And if you don’t know just what would happen if we multiplied A by that product, you aren’t paying attention.
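The billion-year steps above can be replayed in a few lines of Python; the standard `fractions` module keeps the arithmetic exact, so the numbers come out exactly as in the text (the function name `step` is mine):

```python
from fractions import Fraction as F

# The toy-universe transition matrix and the starting distribution
# (100 percent hydrogen), as given in the text:
A = [[F(3, 4), F(0),    F(2, 5)],
     [F(1, 4), F(3, 5), F(2, 5)],
     [F(0),    F(2, 5), F(1, 5)]]
state = [F(100), F(0), F(0)]  # hydrogen, iron, uranium

def step(matrix, v):
    """One billion years of nucleosynthesis: the matrix times the
    column vector of element percentages."""
    return [sum(matrix[i][k] * v[k] for k in range(len(v)))
            for i in range(len(matrix))]

state = step(A, state)
print([float(x) for x in state])  # [75.0, 25.0, 0.0]
state = step(A, state)
print([float(x) for x in state])  # [56.25, 33.75, 10.0]
```

Calling `step` a third time answers the rhetorical question above, and calling it in a loop shows the distribution settling down toward its unchanging limit.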

This also gives a reason why matrix multiplication is defined this way. The operation captures neatly the operation of making a new thing — in the toy universe case, hydrogen or iron or uranium — out of some combination of fractions of an old thing — again, the former distribution of hydrogen and iron and uranium.

Or here’s another reason. Since this matrix A has three rows and three columns, you can multiply it by itself and get a matrix of three rows and three columns out of it. That matrix — which we can write as $A^2$ — then describes how two billion years of nucleosynthesis would change the distribution of elements in the toy universe. A times A times A would give three billion years of nucleosynthesis; $A^{10}$, ten billion years. The actual calculating of the numbers in these matrices may be tedious, but it describes a complicated operation very efficiently, which we always want to do.
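Computing a tenth power by hand is exactly the sort of tedium worth delegating. A sketch (function names mine), again using exact fractions, with one nice sanity check: every column of A sums to 1, since an atom has to become something, and the same stays true of every power of A:

```python
from fractions import Fraction as F

def matmul(A, B):
    """Matrix product; columns of A must match rows of B."""
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]

def matpow(A, n):
    """The n-fold product A * A * ... * A: n billion years at once."""
    result = A
    for _ in range(n - 1):
        result = matmul(result, A)
    return result

A = [[F(3, 4), F(0),    F(2, 5)],
     [F(1, 4), F(3, 5), F(2, 5)],
     [F(0),    F(2, 5), F(1, 5)]]

A10 = matpow(A, 10)
# Each column of A^10 still sums to 1:
for j in range(3):
    print(sum(A10[i][j] for i in range(3)))  # 1 every time
```

Multiplying `A10` by the starting distribution gives the state of the toy universe after ten billion years in one step, rather than ten.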

I should mention another bit of notation. We usually use capital letters to represent matrices; but, a matrix that’s just got one column is also called a vector. That’s often written with a lowercase letter, with a little arrow above the letter, as in $\vec{x}$, or in bold typeface, as in x. (The arrows are easier to put in handwriting, the bold easier in typeset or typewritten text.) But if you’re doing a lot of writing this out, and know that (say) x isn’t being used for anything but vectors, then even that arrow or boldface will be forgotten. Then we’d write the product of matrix A and vector x as just Ax.  (There are also cases where you put a little caret over the letter; that’s to denote that it’s a vector that’s one unit of length long.)

When you start writing vectors without an arrow or boldface you start to run the risk of confusing which symbols mean scalars and which mean vectors. That’s one of the reasons that Greek letters are popular for scalars. It’s also common to put scalars to the left and vectors to the right. So if one saw “rMx”, it would be expected that r is a scalar, M a matrix, and x a vector, and if they’re not then this should be explained in text nearby, preferably before the equations. (And of course if it’s work you’re doing, you should know going in what you mean the letters to represent.)

# The Geometry of Thermodynamics (Part 1)

I should mention that Peter Mander’s Carnot Cycle blog has a fine entry, “The Geometry of Thermodynamics (Part I)” which admittedly opens with a diagram that looks like the sort of thing you create when you want to present a horrifying science diagram. That’s a bit of flavor.

Mander writes about part of what made J Willard Gibbs probably the greatest theoretical physicist the United States has yet produced: Gibbs put much of thermodynamics into a logically neat system, the kind we still basically use today, and, all the better, saw how to represent and understand it as a matter of surface geometries. This is an abstract kind of surface — looking at the curve traced out by, say, mapping the energy of a gas against its volume, or its temperature against its entropy — but if you can accept the idea that we can draw curves representing these quantities then you get to use your understanding of how solid objects look and feel (and Gibbs’s surfaces even got made into solid objects — James Clerk Maxwell, of Maxwell’s Equations fame, sculpted some).

This is a reblogging of only part one, although as Mander’s on summer holiday you haven’t missed part two.

Originally posted on carnotcycle:

Volume One of the Scientific Papers of J. Willard Gibbs, published posthumously in 1906, is devoted to Thermodynamics. Chief among its content is the hugely long and desperately difficult “On the equilibrium of heterogeneous substances (1876, 1878)”, with which Gibbs single-handedly laid the theoretical foundations of chemical thermodynamics.

In contrast to James Clerk Maxwell’s textbook Theory of Heat (1871), which uses no calculus at all and hardly any algebra, preferring geometry as the means of demonstrating relationships between quantities, Gibbs’ magnum opus is stuffed with differential equations. Turning the pages of this calculus-laden work, one could easily be drawn to the conclusion that the writer was not a visual thinker.

But in Gibbs’ case, this is far from the truth.

The first two papers on thermodynamics that Gibbs published, in 1873, were in fact visually-led. Paper I deals with indicator diagrams and their comparative properties, while Paper II

View original 1,491 more words

# In the Overlap between Logic, Fun, and Information

Since I do need to make up for my former ignorance of John Venn’s diagrams and how to use them, let me join in what looks early on like a massive Internet swarm of mentions of Venn. The Daily Nous, a philosophy-news blog, was my first hint that anything interesting was going on (as my love is a philosopher and is much more in tune with the profession than I am with mathematics), and I appreciate the way they describe Venn’s interesting properties. (Also, for me at least, that page recommends I read Dungeons and Dragons and Derrida, itself pointing to an installment of the philosophy-based web comic Existential Comics, so you get a sense of how things go over there.)

And then a friend retweeted the above cartoon (available as T-shirt or hoodie), which does indeed parse as a Venn diagram if you take the left circle as representing “things with flat tails playing guitar-like instruments” and the right circle as representing “things with duck bills playing keyboard-like instruments”. Remember — my love is “very picky” about Venn diagram jokes — the intersection in a Venn diagram is not a blend of the things in the two contributing circles, but is rather, properly, something which belongs to both the groups of things.

The 4th of August is also William Rowan Hamilton’s birthday. He’s known for the discovery of quaternions, which are kind of to complex-valued numbers what complex-valued numbers are to the reals, but they’re harder to make a fun Google Doodle about. Quaternions are a pretty good way of representing rotations in a three-dimensional space, but that just looks like rotating stuff on the computer screen.
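If you want to see the rotation trick in action, here’s a minimal sketch in Python. It uses the standard recipe — build a unit quaternion q from an axis and an angle, then compute q p q* — with the point, axis, and angle all chosen just for illustration:

```python
import math

def qmul(a, b):
    """Hamilton product of two quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(point, axis, angle):
    """Rotate a 3-D point about a unit-length axis by angle radians: q p q*."""
    half = angle / 2.0
    s = math.sin(half)
    q = (math.cos(half), axis[0]*s, axis[1]*s, axis[2]*s)
    q_conj = (q[0], -q[1], -q[2], -q[3])
    p = (0.0,) + tuple(point)          # embed the point as a pure quaternion
    w, x, y, z = qmul(qmul(q, p), q_conj)
    return (x, y, z)

# A quarter-turn about the z-axis carries (1, 0, 0) onto (0, 1, 0).
result = rotate((1, 0, 0), (0, 0, 1), math.pi / 2)
```

Which, yes, on the screen just looks like rotating stuff.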

Originally posted on Daily Nous:

John Venn, an English philosopher who spent much of his career at Cambridge, died in 1923, but if he were alive today he would totally be dead, as it is his 180th birthday. Venn was named after the Venn diagram, owing to the fact that as a child he was terrible at math but good at drawing circles, and so was not held back in 5th grade. In celebration of this philosopher’s birthday Google has put up a fun, interactive doodle — just for today. Check it out.

Note: all comments on this post must be in Venn Diagram form.

View original

# July 2014 in Mathematics Blogging

We’ve finally reached the kalends of August so I can look back at the mathematics blog statistics for June and see how they changed in July. Mostly it’s a chance to name countries that had anybody come read entries here, which is strangely popular. I don’t know why.

Since I’d had 16,174 page views total at the start of July I figured I wasn’t going to cross the symbolically totally important 17,000 by the start of August and what do you know but I was right, I didn’t. I did have a satisfying 589 page views (for a total of 16,763), which doesn’t quite reach May’s heights but is a step up from June’s 492 views. The number of unique visitors as WordPress figures it was 231, up from June’s 194. That’s not an unusually large or small number of unique visitors for this year, and it keeps the views per visitor just about unchanged, 2.55 as opposed to June’s 2.54.

July’s most popular postings were mostly mathematics comics ones — well, they have the most reader-friendly hook after all, and often include a comic or two — but I’m gratified by what proved to be the month’s most popular since I like it too:

1. To Build A Universe, and my simple toy version of an arbitrarily old universe. This builds on In A Really Old Universe and on What’s Going On In The Old Universe, and is followed by Lewis Carroll And My Playing With Universes, also some popular posts.
2. Reading the Comics, July 3, 2014: Wulff and Morgenthaler Edition, I suppose because WuMo is a really popular comic strip these days.
3. Reading the Comics, July 28, 2014: Homework in an Amusement Park Edition, I suppose because everybody likes amusement parks these days.
4. Reading the Comics, July 24, 2014: Math Is Just Hard Stuff, Right? Edition, I suppose because people like thinking mathematics is hard these days.
5. Some Things About Joseph Nebus, because I guess I had a sudden onset of being interesting?
6. Reading the Comics, July 18, 2014: Summer Doldrums Edition, because summer gets to us all these days.

The countries sending me the most readers this month were the United States (369 views), the United Kingdom (43 views), and the Philippines (24 views). Australia, Austria, Canada, and Singapore turned up well too. Sending just a single viewer this month were Greece, Hong Kong, Italy, Japan, Norway, Puerto Rico, and Spain; Hong Kong and Japan were the only ones who did that in June, and for that matter May also. My Switzerland reader from June had a friend this past month.

Among the search terms that brought people to me this month:

• comics strips for differential calculus
• nebus on starwars
• 82 % what do i need on my finalti get a c
• what 2 monsters on monster legends make dark nebus
(this seems like an ominous search query somehow)
• the 80s cartoon character who sees mathematics equations
• starwars nebus
(suddenly this Star Wars/Me connection seems ominous)
• origin is the gateway to your entire gaming universe
(I can’t argue with that)