As much as everything is still happening, and so much of it, there’s still comic strips. I’m fortunately able here to focus just on the comics that discuss some mathematical theme, so let’s get started exploring last week’s reading. The comics that turn up here all the time are the ones worth deeper discussion.
Lincoln Peirce’s Big Nate for the 5th is a casual mention. Nate wants to get out of having to do his mathematics homework. This really could be any subject as long as it fit the word balloon.
Not much to talk about there. But there is a fascinating thing about perimeters that you learn if you go far enough in Calculus. You have to get into multivariable calculus, something where you integrate a function that has at least two independent variables. When you do this, you can find the integral evaluated over a curve. If it’s a closed curve, something that loops around back to itself, then you can do something magic. Integrating the correct function on the curve around a shape will tell you the enclosed area.
And this is an example of one of the amazing things in multivariable calculus. It tells us that integrals over a boundary can tell us something about the integral within a volume, and vice-versa. It can be worth figuring out whether your integral is better solved by looking at the boundaries or at the interiors.
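This boundary-to-interior trick has a discrete cousin that’s easy to compute. Here’s a minimal Python sketch of the shoelace formula, which amounts to integrating around a polygon’s boundary to recover the enclosed area; the vertices are just made-up examples:

```python
def enclosed_area(vertices):
    """Shoelace formula: a discrete version of integrating (1/2)(x dy - y dx)
    around a closed curve, which gives the area the curve encloses."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]  # wrap around to close the loop
        area += x1 * y2 - x2 * y1
    return abs(area) / 2

print(enclosed_area([(0, 0), (4, 0), (4, 3), (0, 3)]))  # 4-by-3 rectangle: 12.0
```

Walking the boundary is all it takes; nothing inside the shape ever gets looked at, which is the magic the essay describes.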
Heron’s Formula, for the area of a triangle based on the lengths of its sides, is an expression of this calculation. I don’t know of a formula exactly like that for the area of a quadrilateral, but there are similar formulas if you know the lengths of the sides and of the diagonals.
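Heron’s Formula itself takes only a couple of lines. A minimal Python sketch, with a 3-4-5 right triangle as the test case:

```python
import math

def heron_area(a, b, c):
    """Area of a triangle from its three side lengths alone."""
    s = (a + b + c) / 2  # the semiperimeter
    return math.sqrt(s * (s - a) * (s - b) * (s - c))

print(heron_area(3, 4, 5))  # right triangle: area 6.0
```

Notice that the perimeter, through the semiperimeter, is the only thing the formula needs.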
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 5th depicts, fairly, the sorts of things that excite mathematicians. The number discussed here is about algorithmic complexity. This is the study of how long it takes to do an algorithm. How long always depends on how big a problem you are working on; to sort four items takes less time than sorting four million items. Of interest here is how much the time to do work grows with the size of whatever you’re working on.
The mathematician’s particular example, and I thank dtpimentel in the comments for finding this, is about the Coppersmith–Winograd algorithm. This is a scheme for doing matrix multiplication, a particular kind of multiplication and addition of square grids of numbers. The grids have some number N of rows and N columns. It’s thought that there exists some way to do matrix multiplication in the order of N^2 time, that is, if it takes 10 time units to multiply matrices of three rows and three columns together, we should expect it takes 40 time units to multiply matrices of six rows and six columns together. The matrix multiplication you learn in linear algebra takes on the order of N^3 time, so, it would take like 80 time units.
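The schoolbook method is easy to sketch, and counting its nested loops shows where the order-N^3 time comes from. A minimal Python version, with a made-up matrix multiplied by the identity as the example:

```python
def matmul(A, B):
    """Schoolbook matrix multiplication: for N-by-N matrices this runs
    N * N * N multiply-and-add steps, hence the order-N^3 running time."""
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                C[i][j] += A[i][k] * B[k][j]
    return C

M = [[2, 3], [4, 5]]
I = [[1, 0], [0, 1]]
print(matmul(M, I))  # multiplying by the identity gives M back
```

The fancier algorithms all find ways to get by with fewer multiplications than those three loops do.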
We don’t know the way to do that. The Coppersmith–Winograd algorithm was thought, after Virginia Vassilevska Williams’s work in 2011, to take something like N^2.3728642 steps. So that six-rows-six-columns multiplication would take slightly over 51.796 844 time units. In 2014, François Le Gall found it was no worse than N^2.3728639 steps, so this would take slightly over 51.796 833 time units. The improvement doesn’t seem like much, but on tiny problems it never does. On big problems, the improvement’s worth it. And, sometimes, you make a good chunk of progress at once.
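The time-unit arithmetic above can be checked directly. A small Python sketch, using the essay’s hypothetical ten-time-unit baseline for a three-by-three multiplication:

```python
def scaled_time(base_time, base_n, n, exponent):
    """If an order-N^e algorithm takes base_time on size base_n,
    estimate its time on size n by scaling with (n / base_n)^e."""
    return base_time * (n / base_n) ** exponent

# Doubling the matrix size under each exponent discussed in the essay:
for e in (2, 3, 2.3728642, 2.3728639):
    print(e, scaled_time(10, 3, 6, e))
```

The two Coppersmith–Winograd-style exponents land a hair above 51.79 time units apiece, with Le Gall’s smaller exponent shaving off about a hundred-thousandth of a unit at this tiny size.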
Today, I’m just listing the comics from last week that mentioned mathematics, but which didn’t raise a deep enough topic to be worth discussing. You know what a story problem looks like. I can’t keep adding to that.
Hector D. Cantú and Carlos Castellanos’s Baldo for the 10th quotes René Descartes, billing him as a “French mathematician”. Which is true, but the quote is one about living properly. That’s more fairly a philosophical matter. Descartes has some reputation for his philosophical work, I understand.
John Hambrock’s The Brilliant Mind of Edison Lee for the 1st of October is a calendar joke. Well, many of the months used to have names that denoted their count. Month names have changed more than you’d think. For a while there every Roman Emperor was renaming months after himself. Most of these name changes did not stick. Lucius Aurelius Commodus, who reigned from 177 to 192, gave all twelve months one or another of his names.
Several of the mathematically-themed comic strips from last week featured the fine art of calculation. So that was set to be my title for this week. Then I realized that all the comics worth some detailed mention were published last Sunday, and I do like essays that are entirely one-day affairs. There are a couple of other comic strips that mentioned mathematics tangentially and I’ll list those later this week.
John Hambrock’s The Brilliant Mind of Edison Lee for the 29th has Edison show off an organic computer. This is a person, naturally enough. Everyone can do some arithmetic in their heads, especially if we allow that approximate answers are often fine. People with good speed and precision have always been wonders, though. The setup may also riff on the ancient joke that a mathematician is a machine for turning coffee into theorems. (I would imagine that Hambrock has heard that joke. But it is enough to suppose that he’s aware many adult humans drink coffee.)
John Kovaleski’s Daddy Daze for the 29th sees Paul, the dad, working out the calculations his son (Angus) proposed. It’s a good bit of arithmetic that Paul’s doing in his head. The process of multiplying an insubstantial thing many, many times over until you get something of moderate size happens all the time. Much of integral calculus is based on the idea that we can add together infinitely many infinitesimal numbers, and from that get something understandable on the human scale. Saving nine seconds every other day is useless for actual activities, though. You need a certain fungibility in the thing conserved for the bother to be worth it.
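That idea, adding together very many very small pieces, can be sketched with a Riemann sum. Here’s a minimal Python example approximating the integral of x² from 0 to 1, whose exact value is 1/3:

```python
def riemann_sum(f, a, b, n):
    """Approximate the integral of f from a to b by adding up
    n thin midpoint slices, each of width (b - a) / n."""
    width = (b - a) / n
    return sum(f(a + (i + 0.5) * width) * width for i in range(n))

approx = riemann_sum(lambda x: x * x, 0.0, 1.0, 10_000)
print(approx)  # close to 1/3
```

Each slice contributes almost nothing; ten thousand of them together make a perfectly human-scale number.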
Dan Thompson’s Harley for the 29th gets us into some comic strips not drawn by people named John. The comic treats its mathematics qualitatively: the observation that you could jump a motorcycle farther, or higher, with more energy, and that you can get energy from rolling downhill. It’s here mostly because of the good fortune that another comic strip did a joke on the same topic, and did it quantitatively. That comic?
Bill Amend’s FoxTrot for the 29th. Young prodigies Jason and Marcus are putting serious calculation into their Hot Wheels track and working out the biggest loop-the-loop possible from a starting point. Their calculations are right, of course. Bill Amend, who’d been a physics major, likes putting authentic mathematics and mathematical physics in. The key is making sure the car moves fast enough in the loop that it stays on the track. This means the car experiences a centrifugal force that’s larger than that of gravity. The centrifugal force on something moving in a circle is proportional to the square of the thing’s speed, and inversely proportional to the radius of the circle. This holds for a circle oriented in any direction, by the way.
So they need to know, if the car starts at the height A, how fast will it go at the top of the loop, at height B? If the car’s going fast enough at height B to stay on the track, it’s certainly going fast enough to stay on for the rest of the loop.
The hard part would be figuring the speed at height B. Or it would be hard if we tried calculating the forces, and thus acceleration, of the car along the track. This would be a tedious problem. It would depend on the exact path of the track, for example. And it would be a long integration problem, which is trouble. There aren’t many integrals we can actually calculate directly. Most of the interesting ones we have to do numerically or work on approximations of the actual thing. This is all right, though. We don’t have to do that integral. We can look at potential energy instead. This turns what would be a tedious problem into the first three lines of work. And one of those was “Kinetic Energy = Δ Potential Energy”.
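The potential-energy shortcut fits in a few lines of code. This is a sketch under the same assumptions as the strip, a frictionless track with the loop’s top at twice the loop radius; the classic result it reproduces is that the car must start at least two and a half loop-radii up:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def speed_at_top(start_height, loop_radius):
    """Speed at the top of the loop (height 2r), from energy conservation
    on a frictionless track: (1/2) v^2 = g * (A - 2r)."""
    drop = start_height - 2 * loop_radius
    if drop < 0:
        return None  # the car never even reaches the top
    return math.sqrt(2 * G * drop)

def stays_on_track(start_height, loop_radius):
    """At the top, staying on the track needs v^2 / r >= g."""
    v = speed_at_top(start_height, loop_radius)
    return v is not None and v * v / loop_radius >= G

# The threshold sits at a start height of 2.5 loop radii:
print(stays_on_track(2.49, 1.0), stays_on_track(2.51, 1.0))
```

No integral along the track ever appears; only the start and top heights matter, which is the whole point of using energy.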
But as Peter observes, this does depend on supposing the track is frictionless. We always do this in basic physics problems. Friction is hard. It does depend on the exact path one follows, for example. And it depends on speed in complicated ways. We can make approximations to allow for friction losses, often based in experiment. Or try to make the problem one that has less friction, as Jason and Marcus are trying to do.
Mark Anderson’s Andertoons for the 18th is the Mark Anderson’s Andertoons for the week. This features the kids learning some of the commonest terms in descriptive statistics. And, as Wavehead says, the similarity of names doesn’t help sorting them out. Each is a kind of average. “Mean” usually is the arithmetic mean, or the thing everyone including statisticians calls “average”. “Median” is the middle-most value, the one that half the data is less than and half the data is greater than. “Mode” is the most common value. In “normally distributed” data, these three quantities are all the same. In data gathered from real-world measurements, these are typically pretty close to one another. It’s very easy for real-world quantities to be normally distributed. The exceptions are usually when there are some weird disparities, like a cluster of abnormally high-valued (or low-valued) results. Or if there are very few data points.
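Python’s standard library happens to keep the three straight. A tiny example, with made-up data chosen so the three averages come apart:

```python
import statistics

# Hypothetical data set: skewed by one abnormally high value.
data = [1, 2, 2, 3, 7]

print(statistics.mean(data))    # arithmetic mean: 3
print(statistics.median(data))  # middle-most value: 2
print(statistics.mode(data))    # most common value: 2
```

With the lone 7 pulling the mean upward, you can see the kind of “weird disparity” that separates these quantities in real data.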
The word “mean” derives from the Old French “meien”, that is, “middle, means”. And that itself traces to the Late Latin “medianus”, and the Latin “medius”. That traces back to the Proto-Indo-European “medhyo”, meaning “middle”. That’s probably what you might expect, especially considering that the mean of a set of data is, if the data is not doing anything weird, likely close to the middle of the set. The term appeared in English in the middle 15th century.
The word “median”, meanwhile, follows a completely different path. That one traces to the Middle French “médian”, which traces to the Late Latin “medianus” and Latin “medius” and Proto-Indo-European “medhyo”. This appeared as a mathematical term in the late 19th century; Etymology Online claims 1883, but doesn’t give a manuscript citation.
The word “mode”, meanwhile, follows a completely different path. This one traces to the Old French “mode”, itself from the Latin “modus”, meaning the measure or melody or style. We get from music to common values by way of the “style” meaning. Think of something being done “à la mode”, that is, “in the [ fashionable or popular ] style”. I haven’t dug up a citation about when this word entered the mathematical parlance.
So “mean” and “median” don’t have much chance to do anything but alliterate. “Mode” is coincidence here. I agree, it might be nice if we spread out the words a little more.
John Hambrock’s The Brilliant Mind of Edison Lee for the 18th has Edison introduce a sequence to his grandfather. Doubling the number of things for each square of a checkerboard is an ancient thought experiment. The notion, with grains of wheat rather than cookies, seems to be first recorded in 1256 in a book by the scholar Ibn Khallikan. One story has it that the inventor of chess requested from the ruler that many grains of wheat, doubled square by square across the board, as reward for inventing the game.
If we followed Edison Lee’s doubling through all 64 squares we’d have, in total, need for 2^64 − 1 or 18,446,744,073,709,551,615 cookies. You can see why the inventor of chess didn’t get that reward, however popular the game was. It stands as a good display of how exponential growth eventually gets to be just that intimidatingly big.
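The total is quick to verify in code, along with a rough eating-time estimate, assuming one cookie a second for eight hours a day:

```python
# Total cookies after doubling across all 64 squares: 1 + 2 + 4 + ... + 2^63.
total = sum(2 ** square for square in range(64))
print(total)  # 18,446,744,073,709,551,615, which is 2**64 - 1

# How long to eat them at one per second, eight hours a day?
seconds_per_day = 8 * 60 * 60
days = total / seconds_per_day
centuries = days / 365.25 / 100
print(centuries)  # around eighteen billion centuries
```

The geometric-series shortcut, that the running total is always one less than the next power of two, drops right out of the first line.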
Edison, like many a young nerd, is trying to stagger his grandfather with the enormity of this. I don’t know that it would work. Grandpa ponders eating all that many cookies, since he’s a comical glutton. I’d estimate eating all that many cookies, at the rate of one a second, eight hours a day, to take something like eighteen billion centuries. If I’m wrong? It doesn’t matter. It’s a while. But is that any more staggering than imagining a task that takes a mere ten thousand centuries to finish?
Mathematics is, to an extent, about finding interesting true statements. What makes something interesting? That depends on the person surprised, certainly. A good guideline is probably “something not obvious before you’ve heard it, that looks inevitable after you have”. That is, a surprise. Learning mathematics probably has to be steadily surprising, and that’s good, because this kind of surprise is fun.
If it’s always a surprise there might be trouble. If you’re doing similar kinds of problems you should start to see them as pretty similar, and have a fair idea what the answers should be. So, from what Toby has said so far … I wouldn’t call him stupid. At most, just inexperienced.
Eric the Circle for the 19th, by Janka, is the Venn Diagram joke for the week. Properly any Venn Diagram with two properties has an overlap like this. We’re supposed to place items in both circles, and in the intersection, to reflect how much overlap there is. Using the sizes of each circle to reflect the sizes of both sets, and the size of the overlap to represent the size of the intersection, is probably inevitable. The shorthand calls on our geometric intuition to convey information, anyway.
Tony Murphy’s It’s All About You for the 19th has a bunch of things going on. The punch line calls “algebra” what’s really a statistics problem, calculating the arithmetic mean of four results. The work done is basic arithmetic. But making work seem like a more onerous task is a good bit of comic exaggeration, and algebra connotes something harder than arithmetic. But Murphy exaggerates with restraint: the characters don’t rate this as calculus.
Then there’s what they’re doing at all. Given four clocks, what’s the correct time? The couple tries averaging them. Why should anyone expect that to work?
There’s reason to suppose this might work. We can suppose all the clocks are close to the correct time. If they weren’t, they would get re-set, or not looked at anymore. A clock is probably more likely to be a little wrong than a lot wrong. You’d let a clock that was two minutes off go about its business, in a way you wouldn’t let a clock that was three hours and 42 minutes off. A clock is probably as likely to show a time two minutes too early as it is two minutes too late. This all suggests that the clock errors are normally distributed, or something like that. So we can expect the error of the arithmetic mean of a bunch of clock measurements to be zero. Or close to zero, anyway.
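As a toy check, here are four hypothetical clock readings whose errors cancel out. The readings are invented for illustration, not taken from the strip:

```python
# Hypothetical clock readings, in minutes past midnight, scattered
# symmetrically around a true time of 8:04 and a half (484.5 minutes).
readings = [483, 484, 485, 486]

mean_reading = sum(readings) / len(readings)
hours, minutes = divmod(mean_reading, 60)
print(int(hours), minutes)  # 8 hours, 4.5 minutes
```

With errors balanced above and below the true time, the average lands right on it; a systematically slow clock in the mix would drag the average off with it.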
There’s reasons this might not work. For example, a clock might systematically run late. My mantle clock, for example, usually drifts about a minute slow over the course of the week it takes to wind. Or the clock might be deliberately set wrong: it’s not unusual to set an alarm clock to five or ten or fifteen minutes ahead of the true time, to encourage people to think it’s later than it really is and they should hurry up. Similarly with watches, if their times aren’t set by Internet-connected device. I don’t know whether it’s possible to set a smart watch to be deliberately five minutes fast, or something like that. I’d imagine it should be possible, but also that the people programming watches don’t see why someone might want to set their clock to the wrong time. From January to March 2018, famously, an electrical grid conflict caused certain European clocks to lose around six minutes. The reasons for this are complicated and technical, and anyway The Doctor sorted it out. But that sort of systematic problem, causing all the clocks to be wrong in the same way, will foil this take-the-average scheme.
Murphy’s not thinking of that, not least because this comic’s a rerun from 2009. He was making a joke, going for the funnier-sounding “it’s 8:03 and five-eighths” instead of the time implied by the average, 8:04 and a half. That’s all right. It’s a comic strip. Being amusing is what counts.
Some weeks there’s an obvious theme. Most weeks there’s not. But mid-March has formed a traditional theme for at least one day. I’m going to excerpt that from the rest of the week’s comics, because I’ve noticed what readership around here is like for stuff tagged “Pi Day” in mid-March. You all can do what you like with your pop-mathematics blogs.
Pi Day seems to have brought out fewer comics than in years past. The ones that were made, among the set I read, were also less on point. There was a lot of actual physical pie involved, too, suggesting the day might be escaping the realm of pop-mathematics silliness straight into a pun nobody thinks much about. Or maybe cartoonists just didn’t have a fresh angle this year.
John Hambrock’s The Brilliant Mind of Edison Lee shows off a nerd kind of mistake. At least one I think of as particularly nerdy. Wanting to calculate is a natural urge, especially for those who do it well. But to calculate the circumference of a pie from its diameter? What is exciting about that? More, does Grandpa recognize what a circumference is? It’s relatively easy to see the diameter of a pie. Area, also. But circumference? I’m not sure people are good at estimating the circumference of things, not by sight. You’d need a tape measure, or a similar flexible ruler, to start with and we don’t see that. Without the chance to measure it himself, Grandpa has to take the circumference (and, for that matter, diameter) at Edison Lee’s word. What would convince Grandpa of anything?
For example, even if Grandpa accepted that Edison Lee had multiplied one number by 3.14 and gotten another number he might ask: how do we know pi is the same for pies of all sizes? Could a small pie’s circumference be only three times the diameter’s length, while a large pie’s is four times that? Could Edison offer an answer for why 3.14, or some nearby number, is all that interesting?
Liz Climo’s Cartoons is an example of the second kind of strip I mentioned during my introductory paragraphs. While it’s nominally built on Pi Day, any mathematics is gone. It’s just about the pun. And, well, the fun of having a capybara around.
Mark Parisi’s Off The Mark is the most on-topic strip for the day. And the anthropomorphic numerals joke for the day, too. It’s built on there being infinitely many digits to π, which, true enough. There are also infinitely many digits to plenty of other numbers, mind; they’re just not so interesting a set. π being irrational gives us a never-ending variety of digits. It’s almost certainly normal, too. Any finite string of digits most likely appears infinitely often in π’s digits.
We won’t ever know enough digits of π to depict all of them. But we can depict the digits we know, and many different ways. Here’s a 2015 Washington Post article with several pictures representing the digits, including some neat “random walk” ones. In those the digits are used to represent directions and distances for a thing to move, and it represents the number as this curious wispy structure. There’s amazing pictures to be made of this.
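The random-walk pictures are easy to sketch. In this minimal version the digits are hard-coded (a real program would want many more), and each digit simply picks one of ten directions to step in:

```python
import math

# Some digits of pi after the decimal point, hard-coded for the sketch.
PI_DIGITS = "1415926535897932384626433832795"

def digit_walk(digits):
    """Turn each digit d into a unit step at angle d * 36 degrees,
    and return the list of (x, y) points the walk visits."""
    x = y = 0.0
    points = [(x, y)]
    for d in digits:
        angle = math.radians(int(d) * 36)
        x += math.cos(angle)
        y += math.sin(angle)
        points.append((x, y))
    return points

path = digit_walk(PI_DIGITS)
print(len(path), path[-1])
```

Plot the points in order and you get one of those curious wispy structures; different mapping rules (four compass directions, say, or pairs of digits) give different pictures of the same number.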
And last, a comic strip that I don’t think was trying to set up a Pi Day joke. Bill Schorr’s The Grizzwells for the 13th is a routine story problem joke. But the setup mentions pies. If this ran on the 14th I would feel confident Schorr was going for a Pi Day comic. But it didn’t, so I don’t know if Schorr was going for that or not.
This installment took longer to write than you’d figure, because it’s the time of year we’re watching a lot of mostly Rankin/Bass Christmas specials around here. So I have to squeeze words out in-between baffling moments of animation and, like, arguing whether there’s any possibility that Jack Frost was not meant to be a Groundhog Day special that got rewritten to Christmas because the networks weren’t having it otherwise.
Jeffrey Caulfield and Brian Ponshock’s Yaffle for the 3rd is the anthropomorphic numerals joke for the week. … You know, I’ve always wondered in this sort of setting, what are two-digit numbers like? I mean, what’s the difference between a twelve and a one-and-two just standing near one another? How do people recognize a solitary number? This is a darned silly thing to wonder so there’s probably a good web comic about it.
John Hambrock’s The Brilliant Mind of Edison Lee for the 4th has Edison forecast the outcome of a basketball game. I can’t imagine anyone really believing in forecasting the outcome, though. The elements of forecasting a sporting event are plausible enough. We can suppose a game to be a string of events. Each of them has possible outcomes. Some of them score points. Some block the other team’s score. Some cause control of the ball (or whatever makes scoring possible) to change teams. Some take a player out, for a while or for the rest of the game. So it’s possible to run through a simulated game. If you know well enough how the people playing do various things? How they’re likely to respond to different states of things? You could certainly simulate that.
But all sorts of crazy things will happen, one game or another. Run the same simulation again, with different random numbers. The final score will likely be different. The course of action certainly will. Run the same simulation many times over. Vary it a little; what happens if the best player is a little worse than average? A little better? What if the referees make a lot of mistakes? What if the weather affects the outcome? What if the weather is a little different? So each possible outcome of the sporting event has some chance. We have a distribution of the possible results. We can judge an expected value, and what the range of likely outcomes is. This demands a lot of data about the players, though. Edison Lee can have it, I suppose. The premise of the strip is that he’s a genius of unlimited competence. You’d more reasonably expect it for college and professional teams.
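A bare-bones version of that many-simulations idea fits in a few lines. Everything here, the possession count and the scoring probability especially, is invented for illustration and is nothing like whatever Edison’s actual model would be:

```python
import random

def simulate_game(possessions, p_score, points=2, rng=random):
    """One simulated game: each possession scores `points` with
    probability p_score. (A made-up toy model of one team's scoring.)"""
    return sum(points for _ in range(possessions) if rng.random() < p_score)

rng = random.Random(42)  # fixed seed, so the run is repeatable
scores = [simulate_game(100, 0.5, rng=rng) for _ in range(1000)]
mean_score = sum(scores) / len(scores)
print(min(scores), mean_score, max(scores))
```

Run a thousand games and you get a distribution of scores, not a single forecast, which is exactly what makes the “he predicted the score” premise comic.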
Brian Basset’s Red and Rover for the 4th uses arithmetic as the homework to get torn up. I’m not sure it’s just a cameo appearance. It makes a difference to the joke as told that there’s division and long division, after all. But it could really be any subject.
Last week Comic Strip Master Command sent out just enough on-theme comics for two essays, the way I do them these days. The first half has some multiplication in two of the strips. So that’s enough to count as a theme for me.
Aaron Neathery’s Endtown for the 26th depicts a dreary, boring school day by using arithmetic. A lot of times tables. There is some credible in-universe reason to be drilling on multiplication like this. The setting is one where the characters can’t expect to have computers available. That granted, I’m not sure there’s a point to going up to memorizing four times 27. Going up to twelve-times seems like enough for common uses. For multiplying numbers of two or more digits together we usually break the problem up into a string of single-digit multiplications.
There are a handful of bigger multiplications that can make your life easier to know, like how four times 25 is 100. Or three times 33 is pretty near 100. But otherwise? … Of course, the story needs the class to do something dull and seemingly pointless. Going deep into multiplication tables communicates that to the reader quickly.
Thaves’s Frank and Ernest for the 26th is a spot of wordplay. Also a shout-out to my friends who record mathematics videos for YouTube. It is built on the conflation between the ideas of something multiplying and the amount of something growing. It’s easy to see where the idea comes from; just keep hitting ‘x 2’ on a calculator and the numbers grow excitingly fast. You get even more exciting results with ‘x 3’ or ‘x π’. But multiplying by 1 is still multiplication. As is multiplying by a number smaller than 1. Including negative numbers. That doesn’t hurt the joke any. That multiplying two things together doesn’t necessarily give you something larger is a consideration when you’re thinking rigorously about what multiplication can do. It doesn’t have to be part of normal speech.
Nate Frakes’s Break of Day for the 27th is the anthropomorphic numerals joke for the week. I don’t know that there’s anything in the other numerals being odds rather than evens, or a mixture of odds and evens. It might just be that they needed to be anything but 1.
So I’m going to have a third Reading the Comics essay for last week’s strips. This happens sometimes. Two of the four strips for this essay mention percentages. But one of the others is so important to me that it gets naming rights for the essay. You’ll understand when I’m done. I hope.
Angie Bailey’s Texts From Mittens for the 2nd talks about percentages. That’s a corner of arithmetic that many people find frightening and unwelcoming. I’m tickled that Mittens doesn’t understand how easy it is to work out a percentage of 100. It’s a good, reasonable bit of characterization for a cat.
John Graziano’s Ripley’s Believe It Or Not for the 2nd is about a subject close to my heart. At least a third of it is. The mention of negative Kelvin temperatures set off a … heated … debate on the comments thread at GoComics.com. Quite a few people remember learning in school that the Kelvin temperature scale starts with the coldest possible temperature, which is zero, and that’s that. They have taken this to denounce Graziano as writing obvious nonsense. Well.
Something you should know about anything you learned in school: the reality is more complicated than that. This is true for thermodynamics. This is true for mathematics. This is true for anything interesting enough for humans to study. This also applies to stuff you learned as an undergraduate. Also to grad school.
So what are negative temperatures? At least on an absolute temperature scale, where the answer isn’t an obvious and boring “cold”? One clue is in the word “absolute” there. It means a way of measuring temperature that’s in some way independent of how we do the measurement. In ordinary life we measure temperatures with physical phenomena. Fluids that expand or contract as their temperature changes. Metals that expand or contract as their temperatures change. For special cases like blast furnaces, sample slugs of clays that harden or don’t at temperature. Observing the radiation of light off a thing. And these are all fine, useful in their domains. They’re also bound in particular physical experiments, though. Is there a definition of temperature that … you know … we can do mathematically?
Of course, or I wouldn’t be writing this. There are two mathematical-physics components to give us temperature. One is the internal energy of your system. This is the energy of whatever your thing is, less the gravitational or potential energy that reflects where it happens to be sitting. Also minus the kinetic energy that comes of the whole system moving in whatever way you like. That is, the energy you’d see if that thing were in an otherwise empty universe. The second part is — OK, this will confuse people. It’s the entropy. Which is not a word for “stuff gets broken”. Not in this context. The entropy of a system describes how many distinct ways there are for a system to arrange its energy. Low-entropy systems have only a few ways to put things. High-entropy systems have a lot of ways to put things. This does harmonize with the pop-culture idea of entropy. There are many ways for a room to be messy. There are few ways for it to be clean. And it’s so easy to make a room messier and hard to make it tidier. We say entropy tends to increase.
So. A mathematical physicist bases “temperature” on the internal energy and the entropy. Imagine giving a system a tiny bit more energy. How many more ways would the system be able to arrange itself with that extra energy? That gives us the temperature. (To be precise, it gives us the reciprocal of the temperature. We could set this up as how a small change in entropy affects the internal energy, and get temperature right away. But I have an easier time thinking of going from change-in-energy to change-in-entropy than the other way around. And this is my blog so I get to choose how I set things up.)
This definition sounds bizarre. But it works brilliantly. It’s all nice clean mathematics. It matches perfectly nice easy-to-work-out cases, too. Like, you may kind of remember from high school physics how the temperature of a gas is something something average kinetic energy something. Work out the entropy and the internal energy of an ideal gas. Guess what this change-in-entropy/change-in-internal-energy thing gives you? Exactly something something average kinetic energy something. It’s brilliant.
In ordinary stuff, adding a little more internal energy to a system opens up new ways to arrange that energy. It always increases the entropy. So the absolute temperature, from this definition, is always positive. Good stuff. Matches our intuition well.
So in 1956 Dr Norman Ramsey and Dr Martin Klein published some interesting papers in the Physical Review. (Here’s a link to Ramsey’s paper and here’s Klein’s, if you can get someone else to pay for your access.) Their insightful question: what happens if a physical system has a maximum internal energy? If there’s some way of arranging the things in your system so that no more energy can come in? What if you’re close to but not at that maximum?
It depends on details, yes. But consider this setup: there’s one, or only a handful, of ways to arrange the maximum possible internal energy. There’s some more ways to arrange nearly-the-maximum-possible internal energy. There’s even more ways to arrange not-quite-nearly-the-maximum-possible internal energy.
Look at what that implies, though. If you’re near the maximum-possible internal energy, then adding a tiny bit of energy reduces the entropy. There’s fewer ways to arrange that greater bit of energy. Greater internal energy, reduced entropy. This implies the temperature is negative.
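A toy system makes the sign flip concrete. Here’s a sketch with a made-up two-level system: 100 atoms, each either in a ground state or holding one unit of energy, with entropy measured in units of Boltzmann’s constant:

```python
from math import comb, log

def entropy(n_excited, n_total):
    """Boltzmann entropy, in units of Boltzmann's constant: the log of
    the number of ways to choose which atoms are the excited ones."""
    return log(comb(n_total, n_excited))

def inverse_temperature(n_excited, n_total):
    """1/T is (change in entropy) / (change in internal energy);
    here we add one unit of energy, that is, excite one more atom."""
    dS = entropy(n_excited + 1, n_total) - entropy(n_excited, n_total)
    dE = 1.0  # one made-up energy unit
    return dS / dE

N = 100
print(inverse_temperature(10, N))  # positive: the ordinary low-energy regime
print(inverse_temperature(90, N))  # negative: near the maximum possible energy
```

With only ten atoms excited, adding energy opens up more arrangements and the temperature is positive; with ninety excited, the system is near its energy ceiling, adding energy closes arrangements off, and the temperature comes out negative.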
So we have to allow the idea of negative temperatures. Or we have to throw out this statistical-mechanics-based definition of temperature. And the definition works so well otherwise. Nobody’s got an idea nearly as good for it. So mathematical physicists shrugged, and noted this as a possibility, but mostly ignored it for decades. If it got mentioned, it was because the instructor was showing off a neat weird thing. This is how I encountered it, as a young physics major full of confidence and not at all good on wedge products. But it was sitting right there, in my textbook, Kittel and Kroemer’s Thermal Physics. Appendix E, four brisk pages before the index. Still, it was an enchanting piece.
And a useful one, possibly the most useful four-page aside I encountered as an undergraduate. My thesis research simulated a fluid-equilibrium problem run at different temperatures. There was a natural way that this fluid would have a maximum possible internal energy. So, a good part — the most fascinating part — of my research was in the world of negative temperatures. It’s a strange one, one where entropy seems to work in reverse. Things build, spontaneously. More heat, more energy, makes them build faster. In simulation, a shell of viscosity-free gas turned into what looked for all the world like a solid shell.
All right, but you can simulate anything on a computer, or in equations, as I did. Would this ever happen in reality? … And yes, in some ways. Internal energy and entropy are ideas that have natural, irresistible fits in information theory. This is the study of … information. I mean, how you send a signal and how you receive a signal. It turns out a lot of laser physics has, in information theory terms, behavior that’s negative-temperature. And, all right, but that’s not what anybody thinks of as temperature.
Well, these ideas happen still. They usually need some kind of special constraint on the things. Atoms held in a magnetic field so that their motions are constrained. Vortices locked into place on a two-dimensional surface (a prerequisite to my little fluids problems). Atoms bound into a lattice that keeps them from being able to fly free. All weird stuff, yes. But all exactly as the statistical-mechanics temperature idea calls on.
And notice. These negative temperatures happen only when the energy is extremely high. This is the grounds for saying that they’re hotter than positive temperatures. And good reason, too. Getting into what heat is, as opposed to temperature, is an even longer discussion. But it seems fair to say something with a huge internal energy has more heat than something with slight internal energy. So Graziano’s Ripley’s claim is right.
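You can see the sign flip in a toy model. Here's a minimal sketch in Python — not anything out of Kittel and Kroemer specifically, just the standard two-level-system exercise: N spins, each excited spin adding one unit of energy, with the temperature read off from how entropy changes with energy.

```python
from math import comb, log

# Two-level paramagnet: N spins, each excited spin adds one unit of energy.
# Entropy S(E) is the log of the number of ways to pick that many excited spins.
N = 1000

def entropy(excited):
    return log(comb(N, excited))

# Statistical mechanics defines 1/T = dS/dE; approximate with a central difference.
def inverse_temperature(excited):
    return (entropy(excited + 1) - entropy(excited - 1)) / 2.0

print(inverse_temperature(100) > 0)  # few spins excited: ordinary positive temperature
print(inverse_temperature(900) < 0)  # most spins excited: entropy falls as energy rises
```

Past the halfway point, adding energy *reduces* the number of arrangements, so the derivative — and the temperature — goes negative, exactly as described above.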
(GoComics.com commenters, struggling valiantly, have tried to talk about quantum mechanics stuff and made a hash of it. As a general rule, skip any pop-physics explanation of something being quantum mechanics.)
If you’re interested in more about this, I recommend Stephen J Blundell and Katherine M Blundell’s Concepts in Thermal Physics. Even if you’re not comfortable enough in calculus to follow the derivations, the textbook prose is insightful.
John Hambrock’s The Brilliant Mind of Edison Lee for the 3rd is a probability joke. And it’s built on how impossible putting together a particular huge complicated structure can be. I admit I’m not sure how I’d go about calculating the chance of a heap of Legos producing a giraffe shape. Imagine working out the number of ways Legos might fall together. Imagine working out how many of those could be called giraffe shapes. It seems too great a workload. And figuring it by experiment, shuffling Legos until a giraffe pops out, doesn’t seem much better.
This approaches an argument sometimes raised about the origins of life. Grant there’s no chance that a pile of Legos could be dropped together to make a giraffe shape. How can the much bigger pile of chemical elements have been stirred together to make an actual giraffe? Or, the same problem in another guise. If a monkey could go at a typewriter forever without typing any of Shakespeare’s plays, how did a chain of monkeys get to writing all of them?
And there’s a couple of explanations. At least partial explanations. There is much we don’t understand about the origins of life. But one is that the universe is huge. There’s lots of stars. It looks like most stars have planets. There’s lots of chances for chemicals to mix together and form a biochemistry. Even an impossibly unlikely thing will happen, given enough chances.
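That last point is just arithmetic. A quick sketch, with made-up numbers purely for illustration:

```python
from math import exp, log1p

def chance_of_at_least_one(p, trials):
    # 1 - (1 - p)^trials, computed stably for tiny p and huge trial counts
    return 1.0 - exp(trials * log1p(-p))

p = 1e-9  # a one-in-a-billion event, say
print(chance_of_at_least_one(p, 1_000))             # still vanishingly small
print(chance_of_at_least_one(p, 1_000_000_000_000)) # all but certain
```

Give an event with probability one-in-a-billion a trillion independent chances and it's nearly certain to happen at least once.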
And another part is selection. A pile of Legos thrown together can do pretty much anything. Any piece will fit into any other piece in a variety of ways. A pile of chemicals is more constrained in what it can do. Hydrogen, oxygen, and a bit of activation energy can make hydrogen-plus-hydroxide ions, water, or hydrogen peroxide, and that’s it. There can be a lot of ways to arrange things. Proteins are chains of amino acids. These chains can be about as long as you like. (It seems.) (I suppose there must be some limit.) And they curl over and fold up in some of the most complicated mathematical problems anyone can even imagine doing. How hard is it to find a set of chemicals that are a biochemistry? … That’s hard to say. There are about twenty amino acids used for proteins in our life. It seems like there could be a plausible life with eighteen amino acids, or twenty-four, including a couple we don’t use here. It seems plausible, too, that my father could have had two brothers growing up; if he had, would I exist?
Jason Chatfield’s Ginger Meggs for the 3rd is a story-problem joke. It’s a familiar old form. The question seems to be a bit mangled in the asking, though. Thirty percent of Jonson’s twelve apples is 3.6, a nasty fractional number of apples. Surely the question should have given Jonson ten and Fitzclown twelve apples. Then thirty percent of Jonson’s apples would be a nice whole number, three.
It wasn’t just another busy week from Comic Strip Master Command. And a week busy enough for me to split the mathematics comics into two essays. It was one where I recognized one of the panels as one I’d featured before. Multiple times. Some of the comics I feature are in perpetual reruns and don’t have your classic, deep, Peanuts-style decades of archives to draw from. I don’t usually go checking my archives to see if I’ve mentioned a comic before, not unless something about it stands out. So for me to notice I’ve seen this strip repeatedly can mean only one thing: there was something a little bit annoying about it. Recognize it yet? You will.
Hy Eisman’s Popeye for the 7th of January, 2018 is an odd place for mathematics to come in. J Wellington Wimpy regales Popeye with all the intellectual topics he tried to impress his first love with, and “Euclidean postulates in the original Greek” made the cut. And, fair enough. Euclid’s books are that rare thing that’s of important mathematical (or scientific) merit and that a lay person can just pick up and read, even for pleasure. These days we’re more likely to see a division between mathematics writing that’s accessible but unimportant (you know, like, me) and writing that’s important but takes years of training to understand. Doing it in the original Greek is some arrogant showing-off, though. Can’t blame Carolyn for bailing on someone pulling that stunt.
John Hambrock’s The Brilliant Mind of Edison Lee for the 8th is set in mathematics class. And Edison tries to use a pile of mathematically-tinged words to explain why it’s okay to read a Star Wars book instead of paying attention. Or at least to provide a response the teacher won’t answer. Maybe we can make something out of this by allowing the monetary value of something to be related to its relevance. But if we allow that then Edison’s messed up. I don’t know what quantity is measured by multiplying “every Star Wars book ever written” by “all the movies and merchandise”. But dividing that by the value of the franchise gets … some modest number in peculiar units divided by a large number of dollars. The numerical value is going to be small. And the dimensions are obviously crazy. Edison needs to pay better attention to the mathematics.
Johnny Hart’s B.C. for the 14th of July, 1960 shows off the famous equation of the 20th century. All part of the comic’s anachronism-comedy chic. The strip reran the 9th of January. “E = mc²” is, correctly, associated with Albert Einstein and some of his important publications of 1905. But the expression does have some curious precursors, people who had worked out the relationship (or something close to it) before Einstein and who didn’t quite know what they had. A short piece from Scientific American a couple years back describes pre-Einstein expressions of the equation from Oliver Heaviside, Henri Poincaré, and Fritz Hasenöhrl. I’m not surprised Poincaré had something close to this; it seems like he spent twenty years almost discovering Relativity. That’s all right; he did enough in dynamical systems that mathematicians aren’t going to forget him.
Jason Chatfield’s Ginger Meggs for the 9th draws my eye just because the blackboard lists “Prime Numbers”. Fair enough place setting, although what’s listed are 1, 3, 5, and 7. These days mathematicians don’t tend to list 1 as a prime number; it’s inconvenient. (A lot of proofs depend on there being exactly one way to factorize a number. But you can always multiply a number by ‘1’ a couple more times without changing its value. So ‘6’ is 3 times 2, but it’s also 3 times 2 times 1, or 3 times 2 times 1 times 1, or 3 times 2 times ‘1’ repeated 145,388,434,247 times. You can write around that, but it’s easier to define ‘1’ as not a prime.) But it could be defended. I can’t think of any reason to leave ‘2’ off a list of prime numbers, though. I think Chatfield conflated odd and prime numbers. If he’d had a bit more blackboard space we could’ve seen whether the next item was 9 or 11 and that would settle the matter.
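If you want to check the blackboard yourself, a few lines of Python will do. This is just the ordinary trial-division test, with ‘1’ excluded by fiat, set against the list of odd numbers:

```python
def is_prime(n):
    # 1 is excluded by definition, so factorizations stay unique.
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

primes = [n for n in range(1, 12) if is_prime(n)]
odds = [n for n in range(1, 12) if n % 2]
print(primes)  # [2, 3, 5, 7, 11]
print(odds)    # [1, 3, 5, 7, 9, 11]
```

The blackboard’s 1, 3, 5, 7 matches the start of the odd numbers, not the primes — and, as noted, the next term (9 versus 11) is where the two lists part ways.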
Paul Trap’s Thatababy for the 9th uses arithmetic — square roots — as the kind of thing to test whether a computer’s working. Everyone has their little tests like this. My love’s father likes to test whether the computer knows of the band Walk The Moon or of Christine Korsgaard (a prominent philosopher in my love’s specialty). I’ve got a couple words I like to check dictionaries for. Of course the test is only any good if you know what the answer should be, and what’s the actual square root of 3,278? Goodness knows. It’s got to be between 50 (50 squared is 25 hundred) and 60 (60 squared is 36 hundred). Since 3,278 is so much closer to 3,600 than to 2,500 its square root should be closer to 60 than to 50. So 57-point-something is plausible. Unfortunately square roots don’t lend themselves to the same sorts of tricks from reading the last digit that cube roots do. And 3,278 isn’t a perfect square anyway. Alexa is right on this one. Also about the specific gravity of cobalt, at least if Wikipedia is right and not conspiring with the artificial intelligences on this one. Catch you in 2021.
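The bracketing argument is easy to replay in code. A small sketch, using the number from the strip:

```python
from math import isqrt, sqrt

n = 3278
# Bracket the square root between neighboring multiples of ten:
# 50 squared is 2500 and 60 squared is 3600, so the root lies between 50 and 60,
# and nearer 60, since 3278 is much closer to 3600 than to 2500.
print(50 ** 2, 60 ** 2)    # 2500 3600
print(isqrt(n))            # 57 -- the integer part of the root
print(isqrt(n) ** 2 == n)  # False: 3278 is not a perfect square
print(round(sqrt(n), 3))   # 57.254 -- so 57-point-something, as estimated
```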
I have my reasons for this installment’s title. They involve my deductions from a comic strip. Give me a few paragraphs.
Mark Anderson’s Andertoons for the 16th asks for attention from whatever optician-written blog reads the comics for the eye jokes. And meets both the Venn Diagram and the Mark Anderson’s Andertoons content requirements for this week. Good job! Starts the week off strong.
Lincoln Peirce’s Big Nate: First Class for the 16th, rerunning the strip from 1993, is about impossibly low-probability events. We can read the comic as a joke about extrapolating a sequence from a couple examples. Properly speaking we can’t; any couple of terms can be extended in absolutely any way. But we often suppose a sequence follows some simple pattern, as many real-world things do. I’m going to pretend we can read Jenny’s estimates of the chance she’ll go out with him as at all meaningful. If Jenny’s estimate of the chance she’d go out with Nate rose from one in a trillion to one in a billion over the course of a week, this could be a good thing. If she’s a thousand times more likely each week to date him — if her interest is rising geometrically — this suggests good things for Nate’s ego in three weeks. If she’s only getting 999 trillionths more likely each week — if her interest is rising arithmetically — then Nate has a touch longer to wait before a date becomes likely.
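The two readings of Jenny's numbers diverge fast. Here's a sketch of both, assuming the pattern holds (which, again, it has no obligation to):

```python
# Jenny's quoted odds: one in a trillion, then one in a billion a week later.
start, one_week_later = 1e-12, 1e-9

# Geometric reading: the chance multiplies by 1000 each week.
geometric = [start * 1000 ** week for week in range(5)]

# Arithmetic reading: the chance gains 999 trillionths each week.
step = one_week_later - start
arithmetic = [start + step * week for week in range(5)]

print(geometric)   # reaches roughly 1 -- near-certainty -- by week 4
print(arithmetic)  # still only about 4-in-a-billion by week 4
```

Under the geometric reading Nate is a near-sure thing three weeks after the second estimate; under the arithmetic reading he'd be waiting on the order of a billion weeks.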
(I forget whether she has agreed to a date in the 24 years since this strip first appeared. He has had some dates with kids in his class, anyway, and some from the next grade too.)
Is the point of the first problem, Farmer Joe’s apples, to see whether a student can do a not-quite-long division? Or is it to see whether the student can extract a price-per-quantity for something, and apply that to find the quantity to fit a given price? If it’s the latter then the numbers don’t make a difference. One would want to avoid marking down a student who knows what to do, and could divide 15 cents by three, but would freeze up if a more plausible price of, say, $2.25 per pound had to be divided by three.
But then the second problem, Mr Schad driving from Belmont to Cadillac, got me wondering. It is about 84 miles between the two Michigan cities (and there is a Reed City along the way). The time it takes to get from one city to another is a fair enough problem. But these numbers don’t make sense. At 55 miles per hour the trip takes an awful 1.5273 hours. Who asks elementary school kids to divide 84 by 55? On purpose? But at the state highway speed limit (for cars) of 70 miles per hour, the travel time is 1.2 hours. 84 divided by 70 is a quite reasonable thing to ask elementary school kids to do.
And then I thought of this: you could say Belmont and Cadillac are about 88 miles apart. Google Maps puts the distance as 86.8 miles, along US 131; but there’s surely some point in the one town that’s exactly 88 miles from some point in the other, just as there’s surely some point exactly 84 miles from some point in the other town. 88 divided by 55 would be another reasonable problem for an elementary school student; 1.6 hours is a reasonable answer. The (let’s call it) 1980s version of the question ought to see the car travel 88 miles at 55 miles per hour. The contemporary version ought to see the car travel 84 miles at 70 miles per hour. No reasonable version would make it 84 miles at 55 miles per hour.
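The arithmetic behind all this is one line, which makes it easy to compare the versions:

```python
def travel_hours(miles, mph):
    return miles / mph

# The strip's version: an awkward division to ask of an elementary schooler.
print(travel_hours(84, 55))  # 1.5272727... hours

# Variants that would make fair problems:
print(travel_hours(84, 70))  # 1.2 hours, at the modern 70 mph speed limit
print(travel_hours(88, 55))  # 1.6 hours, at the old 55 mph speed limit
```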
So did Mallett take a story problem that could actually have been on an era-appropriate test and ancient it up?
Before anyone reports me to Comic Strip Master Command let me clarify what I’m wondering about. I don’t care if the details of the joke don’t make perfect sense. They’re jokes, not instruction. All the story problem needs to set up the joke is the obsolete speed limit; everything else is fluff. And I enjoyed working out a variation of the problem that did make sense, so I’m happy Mallett gave me that to ponder.
Here’s what I do wonder about. I’m curious if story problems are getting an unfair reputation. I’m not an elementary school teacher, or parent of a kid in school. I would like to know what the story problems look like. Do you, the reader, have recent experience with the stuff farmers, drivers, and people weighing things are doing in these little stories? Are they measuring things that people would plausibly care about today, and using values that make sense for the present day? I’d like to know what the state of story problems is.
John Hambrock’s The Brilliant Mind of Edison Lee for the 18th uses mental arithmetic as the gauge of intelligence. Pretty harshly, too. I wouldn’t have known the square root of 8649 off the top of my head either, although it’s easy to tell that 92 can’t be right: the last digit of 92 squared has to be 4. It’s also easy to tell that 92 has to be about right, though, as 90 times 90 will be about 8100. Given this information, if you knew that 8,649 was a perfect square, you’d be hard-pressed to think of a better guess for its value than 93. But since most whole numbers are not perfect squares, “a little over 90” is the best I’d expect to do.
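Those two checks — last digit and order of magnitude — are mechanical enough to write out:

```python
# Quick sanity checks on a claimed square root, as in the strip.
n = 8649

# Last digit: 92 squared must end in 4 (from 2 times 2), but 8649 ends in 9,
# so 92 cannot be right. 93 squared ends in 9 (from 3 times 3), so it could be.
print((92 * 92) % 10)  # 4
print((93 * 93) % 10)  # 9

# Order of magnitude: 90 times 90 is 8100, so the root is a little over 90.
print(93 * 93 == n)    # True: 8,649 really is a perfect square
```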