Reading the Comics, March 6, 2017: Blackboards Edition

I can’t say there’s a compelling theme to the first five mathematically-themed comics of last week. Screens full of mathematics turned up in a couple of them, so I’ll run with that. There were also just enough strips that I’m splitting the week again. It seems fair to me and gives me something to remember Wednesday night that I have to rush to complete.

Jimmy Hatlo’s Little Iodine for the 1st of January, 1956 was rerun on the 5th of March. The setup demands Little Iodine pester her father for help with the “hard homework”, and of course it’s arithmetic that gets to play the hard work. It’s a word problem in terms of who has how many apples, as you might figure. Don’t worry about Iodine’s father getting fired; Little Iodine gets her father fired every week. It’s their schtick.

Jimmy Hatlo’s Little Iodine for the 1st of January, 1956. I guess class started right back up the 2nd, but it would’ve avoided so much trouble if she’d done her homework sometime during the winter break. That said, I never did.

Dana Simpson’s Phoebe and her Unicorn for the 5th mentions the “most remarkable of unicorn confections”, a sugar dodecahedron. Dodecahedrons have long captured human imaginations, as one of the Platonic Solids. The Platonic Solids are one of the ways we can make a solid-geometry analogue to a regular polygon. The other shape Phoebe mentions, the cube, is another of the Platonic Solids, but that one’s common enough to encourage no sense of mystery or wonder. The cube is the only one of the Platonic Solids that will fill space, though: you can put cubes into stacks that don’t leave gaps between them. Sugar cubes, Wikipedia tells me, have been made only since the 19th century; the Moravian sugar factory director Jakub Kryštof Rad got a patent for cutting block sugar into uniform pieces in 1843. I can’t dispute the fun of “dodecahedron” as a word to say. Many solid-geometric shapes have names that are merely descriptive, but which are rendered with Greek or Latin syllables so as to sound magical.

Bud Grace’s Piranha Club for the 6th started a sequence in which the Future Disgraced Former President needs the most brilliant person in the world, Bud Grace. A word balloon full of mathematics is used as a symbol of this genius. I feel compelled to point out Bud Grace was a physics major. But while Grace could as easily have used something from the physics department to show his deep thinking abilities, that would all but certainly have been rendered as equations and graphs, the stuff of mathematics again.

Bud Grace’s Piranha Club for the 6th of March, 2017. 241 times 635 is 153,035 by the way. I wouldn’t work that out in my head if I needed the number. I might work out an estimate of how big it was, in which case I’d do this: 241 is about 250, which is one-quarter of a thousand. One-quarter of 635 is something like 150, which times a thousand is 150,000. If I needed it exactly I’d get a calculator. Unless I just needed something to occupy my mind without having any particular emotional charge.
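
That estimating strategy can be spelled out in a few lines of Python. The numbers are the ones from the caption; the code just makes each rounding step explicit.

```python
# The estimate from the caption, step by step: 241 is about 250,
# which is a quarter of a thousand; a quarter of 635 is something
# like 150; so the product is in the neighborhood of 150,000.
exact = 241 * 635
estimate = 150 * 1000

# The rounding costs only about two percent.
error = abs(exact - estimate) / exact
print(exact, estimate, round(error, 3))
```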

Scott Meyer’s Basic Instructions rerun for the 6th is aptly titled, “How To Unify Newtonian Physics And Quantum Mechanics”. Meyer’s advice is not bad, really, although it’s generic enough to apply to any attempt to reconcile two different models of a phenomenon. Also there’s not particularly a problem reconciling Newtonian physics with quantum mechanics. It’s general relativity and quantum mechanics that are so hard to reconcile.

Still, Basic Instructions is about how you can do a thing, or learn to do a thing. It’s not about how to allow anything to be done for the first time. And it’s true that, per quantum mechanics, we can’t predict exactly what any one particle will do at any time. We can say what possible things it might do and how relatively probable they are. But big stuff, the stuff for which Newtonian physics is relevant, involves so many particles that the unpredictability becomes too small to notice. We can see this in the Law of Large Numbers. That’s the probability rule that tells us we can’t predict any coin flip, but we know that a million fair tosses of a coin will not turn up 800,000 tails. There’s more to it than that (there’s always more to it), but that’s a starting point.
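
The coin-flip claim is easy to check with a quick simulation; this is my own illustration, not anything from the strip.

```python
import random

# A quick check of the coin-flip claim: a million fair tosses. Any
# single toss is unpredictable, but the total is very predictable.
random.seed(42)
n = 1_000_000
tails = sum(random.random() < 0.5 for _ in range(n))

# The typical deviation from 500,000 tails is on the order of
# sqrt(n)/2 = 500 tosses; 800,000 tails would be a 600-sigma event.
print(tails / n)
```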

Michael Fry’s Committed rerun for the 6th features Albert Einstein as the icon of genius. Natural enough. And it reinforces this with the blackboard full of mathematics. I’m not sure if that blackboard note of “E = mc³” is supposed to be a reference to the famous Far Side panel of Einstein hearing the maid talk about everything being squared away. I’ll take it as such.

Reading the Comics, March 4, 2017: Frazz, Christmas Trees, and Weddings Edition

It was another of those curious weeks when Comic Strip Master Command didn’t send quite enough comics my way. Among those they did send were a couple of strips in pairs. I can work with that.

Samson’s Dark Side Of The Horse for the 26th is the Roman Numerals joke for this essay. I apologize to Horace for being so late in writing about Roman Numerals but I did have to wait for Cecil Adams to publish first.

In Jef Mallett’s Frazz for the 26th Caulfield ponders what we know about Pythagoras. It’s hard to say much about the historical figure: he built around himself a cult that sounds outright daft. But it’s hard to say how much of their craziness was actually their craziness, how much was just that any ancient society had a lot of what seems nutty to us, and how much was jokes (or deliberate slander) directed against some weirdos. What does seem certain is that Pythagoras’s followers attributed many of their discoveries to him. And what’s certain is that the Pythagorean Theorem was known, at least as a thing that could be used to measure things, long before Pythagoras was on the scene. I’m not sure if it was proved as a theorem or whether it was just known that making triangles with the right relative lengths meant you had a right triangle.

Greg Evans’s Luann Againn for the 28th of February — reprinting the strip from the same day in 1989 — uses a bit of arithmetic as generic homework. It’s an interesting change of pace that the mathematics homework is what keeps one from sleep. I don’t blame Luann or Puddles for not being very interested in this, though. Those sorts of complicated-fraction-manipulation problems, at least when I was in middle school, were always slogs of shuffling stuff around. They rarely got to anything we’d like to know.

Jef Mallett’s Frazz for the 1st of March is one of those little revelations that statistics can give one. Myself, I was always haunted by the line in Carl Sagan’s Cosmos about how, in the future, with the Sun ageing and (presumably) swelling in size and heat, the Earth would see one last perfect day. That there would most likely be quite fine days after that didn’t matter, and that different people might disagree on what made a day perfect didn’t matter. Setting out the idea of a “perfect day” and realizing there would someday be a last gave me chills. It still does.

Richard Thompson’s Poor Richard’s Almanac for the 1st and the 2nd of March have appeared here before. But I like the strip so I’ll reuse them too. They’re from the strip’s guide to types of Christmas trees. The Cubist Fir is described as “so asymmetrical it no longer inhabits Euclidean space”. Properly neither do we, but we can’t tell by eye the difference between our space and a Euclidean space. “Non-Euclidean” has picked up connotations of being so bizarre or even horrifying that we can’t hope to understand it. In practice, it means we have to go a little slower and think about, like, what would it look like if we drew a triangle on a ball instead of a sheet of paper. The Platonic Fir, in the 2nd of March strip, looks like a geometry diagram and I doubt that’s coincidental. It’s very hard to avoid thoughts of Platonic Ideals when one does any mathematics with a diagram. We know our drawings aren’t very good triangles or squares or circles especially. And three-dimensional shapes are worse, as see every ellipsoid ever done on a chalkboard. But we know what we mean by them. And then we can get into a good argument about what we mean by saying “this mathematical construct exists”.
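
The triangle-on-a-ball idea can even be checked directly. Here’s a small Python sketch of my own (nothing from the strip): the triangle whose corners are where the three coordinate axes meet a unit sphere has three right angles, so its angles total 270 degrees rather than the Euclidean 180.

```python
import math

# A triangle drawn on a sphere: vertices where the x, y, and z axes
# meet the unit sphere. Each side is a quarter of a great circle.
def angle(u, v, w):
    """Angle at vertex u of the spherical triangle with vertices u, v, w."""
    def tangent(a, b):
        # Direction of the great-circle arc from a toward b: the part
        # of b perpendicular to a, normalized.
        dot = sum(x * y for x, y in zip(a, b))
        t = [y - dot * x for x, y in zip(a, b)]
        norm = math.sqrt(sum(x * x for x in t))
        return [x / norm for x in t]
    t1, t2 = tangent(u, v), tangent(u, w)
    return math.degrees(math.acos(sum(x * y for x, y in zip(t1, t2))))

a, b, c = [1, 0, 0], [0, 1, 0], [0, 0, 1]
total = angle(a, b, c) + angle(b, a, c) + angle(c, a, b)
print(total)  # 270 degrees, not the Euclidean 180
```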

Mark Litzler’s Joe Vanilla for the 3rd uses a chalkboard full of mathematics to represent the deep thinking behind a silly little thing. I can’t make any of the symbols out to mean anything specific, but I do like the way it looks. It’s quite well-done in looking like the shorthand that, especially, physicists would use while roughing out a problem. That there are subscripts with forms like “12” and “22” with a bar over them reinforces that. I would, knowing nothing else, expect this to represent some interaction between particles 1 and 2, and 2 with itself, and that the bar means some kind of complement. This doesn’t mean much to me, but with luck, it means enough to the scientist working it out that it could be turned into a coherent paper.

Bill Holbrook’s On The Fastrack for the 3rd of March, 2017. Fi’s dress isn’t one of those … kinds with the complicated pattern of holes in it. She got it torn while trying to escape the wedding and falling into the basement.

Bill Holbrook’s On The Fastrack is this week about the wedding of the accounting-minded Fi. And she’s having last-minute doubts, which is why the strip of the 3rd brings in irrational and anthropomorphized numerals. π gets called in to serve as emblematic of the irrational numbers. Can’t fault that. I think the only more famously irrational number is the square root of two, and π anthropomorphizes more easily. Well, you can draw an established character’s face onto π. The square root of 2 is, necessarily, at least two disconnected symbols and you don’t want to raise distracting questions about whether the root sign or the 2 gets the face.

That said, it’s a lot easier to prove that the square root of 2 is irrational. Even the Pythagoreans knew it, and a bright child can follow the proof. A really bright child could create a proof of it. To prove that π is irrational is not at all easy; it took mathematicians until the 18th century. And the best proof I know of the fact does it by a roundabout method. We prove that if a number (other than zero) is rational then the tangent of that number must be irrational, and vice-versa. And the tangent of π/4 is 1, so therefore π/4 must be irrational, so therefore π must be irrational. I know you’ll all trust me on that argument, but I wouldn’t want to sell it to a bright child.
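
The square-root-of-2 proof a bright child can follow is short enough to sketch here; it’s the standard argument by contradiction, nothing particular to the strip.

```latex
% Claim: \sqrt{2} is irrational.
% Suppose not: \sqrt{2} = p/q for integers p, q with no common factor.
\sqrt{2} = \frac{p}{q}
  \;\Longrightarrow\; 2 q^2 = p^2
  \;\Longrightarrow\; p \text{ is even, say } p = 2r
  \;\Longrightarrow\; 2 q^2 = 4 r^2
  \;\Longrightarrow\; q^2 = 2 r^2
  \;\Longrightarrow\; q \text{ is even as well.}
% Both p and q even contradicts their having no common factor,
% so no such fraction p/q exists.
```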

Bill Holbrook’s On The Fastrack for the 4th of March, 2017. I feel bad that I completely forgot Carl had a kid and that the face on the x doesn’t help me remember anything.

Holbrook continues the thread on the 4th, extending the anthropomorphic-mathematics-stuff to call people variables. There’s ways that this is fair. We use a variable for a number whose value we don’t know or don’t care about. A “random variable” is one that could take on any of a set of values. We don’t know which one it does, in any particular case. But we do know — or we can find out — how likely each of the possible values is. We can use this to understand the behavior of systems even if we never actually know what any one of them does. You see how I’m going to defend this metaphor, then, especially if we allow that what people are likely or unlikely to do will depend on context and evolve in time.
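
A minimal sketch of the random-variable idea, in Python; the values and probabilities here are made up for illustration.

```python
import random

# A random variable in miniature: we never know what any one draw
# will be, but we do know how likely each value is, and that pins
# down the aggregate behavior.
random.seed(1)
values = [1, 2, 3]
weights = [0.5, 0.3, 0.2]      # the known probabilities

draws = random.choices(values, weights=weights, k=100_000)
expected = sum(v * w for v, w in zip(values, weights))   # 1.7
observed = sum(draws) / len(draws)
print(round(expected, 2), round(observed, 2))
```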

Reading the Comics, February 2, 2017: I Haven’t Got A Jumble Replacement Source Yet

If there was one major theme for this week it was my confidence that there must be another source of Jumble strips out there. I haven’t found it, but I admit not making it a priority either. The official Jumble site says I can play if I activate Flash, but I don’t have enough days in the year to keep up with Flash updates. And that doesn’t help me post mathematics-relevant puzzles here anyway.

Mark Anderson’s Andertoons for January 29th satisfies my Andertoons need for this week. And it name-drops the one bit of geometry everyone remembers. To be dour and humorless about it, though, I don’t think one could likely apply the Pythagorean Theorem. Typically the horizontal axis and the vertical axis in a graph like this measure different things. Squaring the different kinds of quantities and adding them together wouldn’t mean anything intelligible. What would even be the square root of (say) a squared-dollars-plus-squared-weeks? This is something one learns from dimensional analysis, a corner of mathematics I’ve thought about writing about some. I admit this particular insight isn’t deep, but everything starts somewhere.
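
A tiny sketch of the dimensional-analysis point, with made-up points: change the units on one axis and the “distances” reshuffle, which is a sign the hypotenuse never measured anything real.

```python
import math

# Points on a graph whose axes measure different things: x in weeks,
# y in dollars. The Pythagorean distance between such points depends
# on the arbitrary choice of units, so it can't mean anything physical.
def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

a, b, c = (1, 100), (2, 100), (1, 103)   # (weeks, dollars)
print(distance(a, b), distance(a, c))    # b looks nearer to a: 1.0 vs 3.0

# Measure time in days instead of weeks and the order flips.
a2, b2, c2 = (7, 100), (14, 100), (7, 103)
print(distance(a2, b2), distance(a2, c2))  # now c is nearer: 7.0 vs 3.0
```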

Norm Feuti’s Gil rerun for the 30th is a geometry name-drop, listing it as the sort of category Jeopardy! features. Gil shouldn’t quit so soon. The responses for the category are “What is the Pythagorean Theorem?”, “What is acute?”, “What is parallel?”, “What is 180 degrees?” (or, possibly, 360 or 90 degrees), and “What is a pentagon?”.

Terri Libenson’s Pajama Diaries for the 1st of February, 2017. You know even for a fundraising event $17.50 seems a bit much for a hot dog and bottled water. Maybe the friend’s 8-year-old child is way off too.

Terri Libenson’s Pajama Diaries for the 1st of February shows off the other major theme of this past week, which was busy enough that I have to again split the comics post into two pieces. That theme is people getting basic mathematics wrong. Mostly counting. (You’ll see.) I know there’s no controlling what people feel embarrassed about. But I think it’s unfair to conclude you “can no longer” do mathematics in your head because you’re not able to make change right away. It’s normal to be slow or unreliable about something you don’t do often. Inexperience and inability are not the same thing, and it’s unfair to people to conflate them.

Gordon Bess’s Redeye for the 21st of September, 1970, got rerun the 1st of February. And it’s another in the theme of people getting basic mathematics wrong. And even more basic mathematics this time. There’s more problems-with-counting comics coming when I finish the comics from the past week.

Gordon Bess’s Redeye for the 21st of September, 1970. Rerun the 1st of February, 2017. I don’t see why they’re so worried about counting bullets if being shot just leaves you a little discombobulated.

Dave Whamond’s Reality Check for the 1st hopes that you won’t notice the label on the door is painted backwards. Just saying. It’s an easy joke to make about algebra, also, that it puts letters into perfectly good mathematics. Letters are used for good reasons, though. We’ve always wanted to work out the value of numbers we only know descriptions of. But it’s way too wordy to use the whole description of the number every time we might speak of it. Before we started using letters we could use placeholder names like “re”, meaning “thing” (as in “thing we want to calculate”). That works fine, although it crashes horribly when we want to track two or three things at once. It’s hard to find words that are decently noncommittal about their values but that we aren’t going to confuse with each other.

So the alphabet works great for this. An individual letter doesn’t suggest any particular number, as long as we pretend ‘O’ and ‘I’ and ‘l’ don’t look like they do. But we also haven’t got any problem telling ‘x’ from ‘y’ unless our handwriting is bad. They’re quick to write and to say aloud, and they don’t require learning to write any new symbols.

Later, yes, letters do start picking up connotations. And sometimes we need more letters than the Roman alphabet allows. So we import from the Greek alphabet the letters that look different from their Roman analogues. That’s a bit exotic. But at least in a Western-European-based culture they aren’t completely novel. Mathematicians aren’t really trying to make this hard because, after all, they’re the ones who have to deal with the hard parts.

Bud Fisher’s Mutt and Jeff rerun for the 2nd is another of the basic-mathematics-wrong jokes. But it does get there by throwing out a baffling set of story-problem-starter points. Particularly interesting to me is Jeff’s protest in the first panel that they couldn’t have been doing 60 miles an hour as they hadn’t been out an hour. It’s the sort of protest easy to use as introduction to the ideas of average speed and instantaneous speed and, from that, derivatives.
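
Jeff’s protest can be put in code. Here’s a sketch with a position function I’ve made up for illustration: the average speed over the first half-hour and the instantaneous speed at that moment come out different.

```python
# Jeff's protest marks the difference between average speed (distance
# covered over elapsed time) and instantaneous speed (the derivative
# of position). The position function here is made up for illustration.
def position(t):
    return 60 * t ** 2   # miles after t hours, accelerating steadily

def average_speed(t0, t1):
    return (position(t1) - position(t0)) / (t1 - t0)

def instantaneous_speed(t, h=1e-6):
    # Central finite-difference approximation to the derivative.
    return (position(t + h) - position(t - h)) / (2 * h)

# A half-hour in, the trip has averaged 30 mph even though the
# speedometer at that instant reads about 60 mph.
print(average_speed(0, 0.5), round(instantaneous_speed(0.5)))
```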

48 Altered States

I saw this intriguing map produced by Brian Brettschneider.

He made it on and for Twitter, as best I can determine. I found it from a stray post in Usenet newsgroup soc.history.what-if, dedicated to ways history could have gone otherwise. It also covers ways that it could not possibly have gone otherwise but would be interesting to see happen. Very different United States state boundaries are part of the latter set of things.

The location of these boundaries is described in English and so comes out a little confusing. It’s hard to make concise. Every point in, say, this alternate Missouri is closer to Missouri’s capital of … uhm … Missouri City than it is to any other state’s capital. And the same for all the other states. All you kind readers who made it through my recent A To Z know a technical term for this. This is a Voronoi Diagram. It uses as its basis points the capitals of the (contiguous) United States.
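
The rule is easier to state in code than in English. The capital coordinates below are rough latitude/longitude values I’ve filled in for three capitals, for illustration only; the point is just that “assign every point to its nearest capital” is exactly the Voronoi rule.

```python
import math

# The map's rule, in code: every point belongs to whichever capital
# is nearest, which is a Voronoi diagram with the capitals as basis
# points.
capitals = {
    "Jefferson City, MO": (38.58, -92.17),
    "Springfield, IL": (39.80, -89.64),
    "Topeka, KS": (39.05, -95.68),
}

def nearest_capital(point):
    # Plain Euclidean distance on latitude/longitude; a real map would
    # use great-circle distance, but the partitioning idea is the same.
    return min(capitals, key=lambda name: math.dist(point, capitals[name]))

# Columbia, Missouri (roughly 38.95 N, 92.33 W) lands in the alternate
# Missouri, since Jefferson City is its nearest capital.
print(nearest_capital((38.95, -92.33)))
```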

It’s an amusing map. I mean amusing to people who can attach concepts like amusement to maps. It’d probably be a good one to use if someone needed to make a Risk-style grand strategy game map and didn’t want to be too beholden to the actual map.

No state comes out unchanged, although a few don’t come out too bad. Maine is nearly unchanged. Michigan isn’t changed beyond recognition. Florida gets a little weirder but if you showed someone this alternate shape they’d recognize the original. No such luck with alternate Tennessee or alternate Wyoming.

The connectivity between states changes a little. California and Arizona lose their border. Washington and Montana gain one; similarly, Vermont and Maine suddenly become neighbors. The “Four Corners” spot where Utah, Colorado, New Mexico, and Arizona converge is gone. Two new ones look like they appear, between New Hampshire, Massachusetts, Rhode Island, and Connecticut; and between Pennsylvania, Maryland, Virginia, and West Virginia. I would be stunned if that weren’t just because we can’t zoom far enough in on the map to see they’re actually a pair of nearby three-way junctions.

I’m impressed by the number of borders that are nearly intact, like those of Missouri or Washington. After all, many actual state boundaries are geographic features like rivers that a Voronoi Diagram doesn’t notice. How could Ohio come out looking anything like Ohio?

The reason comes down to historical subtleties. At least once you get past the original 13 states, basically the east coast of the United States. The boundaries of those states were set by colonial charters, drawn with little or ambiguous information about what the local terrain was actually like, and drawn to reward or punish court factions and favorites. Never mind the original thirteen (plus Maine and Vermont, which we might as well consider part of the original thirteen).

After that, though, the United States started drawing state boundaries and had some method to it all. Generally a chunk of territory would be split into territories and later states that would be roughly rectangular, so far as practical, and roughly similar in size to the other states carved out of the same area. So for example Missouri and Alabama are roughly similar to Georgia in size and even shape. Louisiana, Arkansas, and Missouri are about equal in north-south span and loosely similar east-to-west. Kansas, Nebraska, South Dakota, and North Dakota aren’t too different in their north-to-south or east-to-west spans.

There’s exceptions, for reasons tied to the complexities of history. California and Texas get peculiar shapes because they could. Michigan has an upper peninsula for quirky reasons that some friend of mine on Twitter discovers every three weeks or so. But the rough guide is that states look a lot more similar to one another than you’d think from a quick look. Mark Stein’s How The States Got Their Shapes is an endlessly fascinating text explaining this all.

If there is a loose logic to state boundaries, though, what about state capitals? Those are more quirky. One starts to see the patterns when considering questions like “why put California’s capital in Sacramento instead of, like, San Francisco?” or “Why Jefferson City instead of Saint Louis or Kansas City?” There is no universal guide, but there are some trends. Generally states end up putting their capitals in a city that’s relatively central, at least to the major population centers around the time of statehood. And, generally, not in one of the state’s big commercial or industrial centers. The desire to be geographically central is easy to understand. No fair making citizens trudge that far if they have business in the capital. Avoiding the (pardon) first tier of cities has subtler politics to it; it’s an attempt to get the government somewhere at least a little inconvenient to the money powers.

There’s exceptions, of course. Boston is the obviously important city in Massachusetts, Salt Lake City the place of interest for Utah, Denver the equivalent for Colorado. Capitals relocated; Atlanta is, I think, Georgia’s eighth(?) capital since statehood. Sometimes they were weirder. Until 1854 Rhode Island rotated between five cities, to the surprise of people trying to name a third city in Rhode Island. New Jersey settled on Trenton as a compromise between the East and West Jersey capitals of Perth Amboy and Burlington. But if you look for a city that’s fairly central but not the biggest in the state you get to the capital pretty often.

So these are historical and cultural factors which combine to make a Voronoi Diagram map of the United States strange, but not impossibly strange, compared to what has really happened. Things are rarely so arbitrary as they seem at first.

• Matthew Wright 6:49 pm on Tuesday, 17 January, 2017

New Zealand’s provincial borders were devised at much the same time as the midwestern and western US and in much the same way. Some guy with a map that only vaguely showed rivers, and a ruler. Well, when I say ‘some guy’ I mean George Grey, Edward Eyre and their factotum, Alfred Domett among only a handful of others. Early colonial New Zealand was like that. The civil service consisted of about three people (all of them Domett) and because the franchise system meant some voting districts might have as few as 25 electors, anybody had at least a 50/50 chance of becoming Prime Minister.

• Joseph Nebus 3:45 pm on Saturday, 21 January, 2017

I am intrigued and delighted to learn this! For all that I do love maps and seeing how borders evolve over time I’m stronger on United States and Canadian province borders; they’re just what was easily available when I grew up. (Well, and European boundaries, but I don’t think there’s a single one of them that’s based on anything more than “this is where the armies stood on V-E Day”.)

Would you have a recommendation on a pop history of New Zealand for someone who knows only, mostly, that I guess confederation with Australia was mooted in 1900 but refused since the islands are actually closer to the Scilly Isles than they are to Canberra for crying out loud?

• Matthew Wright 8:43 pm on Saturday, 21 January, 2017

Europe has had so many boundary changes since Roman times that I wouldn’t be surprised if there’s a tradition for governments to issue people with an eraser and pot of paint to update their maps – and, no question, their history IS the history of those boundary changes. Certainly it explains their wars…

On matters NZ, I wrote just such a book – it was first published in 2004 and has been through a couple of editions (I updated it in 2012). My publishers, Bateman, put it up on Kindle:

It’s ‘publisher priced’ but I’d thoroughly recommend it! :-) The parallels between NZ’s settler period and the US ‘midwestern’ expansion through to California at the same time are direct.

The reasons why NZ never joined Australia in 1900 have been endlessly debated and never answered but probably had something to do with the way NZ was socially re-identifying itself with Britain at the time. The British ignored the whole thing for defence/strategic purposes, deploying just one RN squadron to Sydney as the ‘mid point’ of Australasia. Sydney-siders liked it, but everybody from Perth to Wellington was annoyed. I wrote my thesis on the political outcome, way back when.

• Joseph Nebus 6:19 am on Saturday, 28 January, 2017

Aw, thank you kindly! I’d thought you might have something suitable.

The organizing of territory that white folks told themselves was unsettled is a process I find interesting, I suppose because I’ve always wondered about how one goes about establishing systems. I think it’s similar to my interest in how nations devastated by wars get stuff like trash collection and fire departments and regional power systems running again. The legal system for how the United States organized territory, at least, is made clear enough in public schools (at least to students who pay attention, like me), but it isn’t easy to find the parallel processes in other countries. Now and then I try reading about Canada and how two of every seven sections of land in (now) Quebec and Ontario were reserved to the church and then I pass out and by the time I wake up again they’re making infrastructure promises to Prince Edward Island.

I’m not surprised that from the British side of things the organization of New Zealand and Australia amounted to a bit of afterthought and trusting things would work out all right. I have read a fair bit (for an American) about the British Empire and it does feel like all that was ever thought about was India and the route to India and an ever-widening corridor of imagined weak spots on the route to India. The rest of the world was, pick some spot they had already, declare it “the Gibraltar of [ Geographic Region ]” and suppose there’d be a ship they could send there if they really had to.

Reading the Comics, January 7, 2016: Just Before GoComics Breaks Everything Edition

Most of the comics I review here are printed on GoComics.com. Well, most of the comics I read online are from there. But even so, I think they have the most comic strips that mention mathematical themes. Anyway, they’re unleashing a complete web site redesign on Monday. I don’t know just what the final version will look like. I know that the beta versions included the incredibly useful, that is to say dumb, feature where if a particular comic you do read doesn’t have an update for the day — and many of them don’t, as they’re weekly or three-times-a-week or so — then it’ll show some other comic in its place. I mean, the idea of encouraging people to find new comics is a good one. To some extent that’s what I do here. But the beta made no distinction between “comic you don’t read because you never heard of Microcosm” and “comic you don’t read because glancing at it makes your eyes bleed”. And on an idiosyncratic note, I read a lot of comics. I don’t need to see Dude and Dude reruns in fourteen spots on my daily comics page, even if I didn’t mind it to start.

Anyway. I am hoping, desperately hoping, that with the new site all my old links to comics are going to keep working. If they don’t then I suppose I’m just ruined. We’ll see. My suggestion is if you’re at all curious about the comics you read them today (Sunday) just to be safe.

Ashleigh Brilliant’s Pot-Shots is a curious little strip I never knew of until GoComics picked it up a few years ago. Its format is compellingly simple: a little illustration alongside a wry, often despairing, caption. I love it, but I also understand why it was the subject of endless queries to the Detroit Free Press (Or Whatever) about why this thing was taking up newspaper space. The strip rerun the 31st of December is a typical example of the strip and amuses me at least. And it uses arithmetic as the way to communicate reasoning, both good and bad. Brilliant’s joke does address something that logicians have to face, too. Whether an argument is logically valid depends entirely on its structure. If the form is correct the reasoning may be excellent. But to be sound an argument must be valid and must also have true assumptions. We can separate whether an argument is right from whether it could ever possibly be right. If you don’t see the value in that, you have never participated in an online debate about where James T Kirk was born and whether Spock was the first Vulcan in Star Fleet.

Thom Bluemel’s Birdbrains for the 2nd of January, 2017, is a loaded-dice joke. Is this truly mathematics? Statistics, at least? Close enough for the start of the year, I suppose. Working out whether a die is loaded is one of the things any gambler would like to know, and that mathematicians might be called upon to identify or exploit. (I had a grandmother unshakably convinced that I would have some natural ability to beat the Atlantic City casinos if she could only sneak the underaged me in. I doubt I could do anything of value there besides see the stage magic show.)
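
For the curious, here’s a sketch of one standard way a mathematician might check for loading, the chi-square test, with a simulated fair die and a hypothetical die weighted toward six. None of this is from the strip; it’s just the technique in miniature.

```python
import random

# Roll each die many times and compute the chi-square statistic of the
# face counts against the fair-die expectation. Large values are
# evidence of loading.
def chi_square(counts):
    n = sum(counts)
    expected = n / 6
    return sum((c - expected) ** 2 / expected for c in counts)

random.seed(7)
fair = [0] * 6
loaded = [0] * 6
for _ in range(6000):
    fair[random.randrange(6)] += 1
    # The loaded die is made up: six comes up twice as often as any
    # other face.
    loaded[random.choices(range(6), weights=[1, 1, 1, 1, 1, 2])[0]] += 1

# With five degrees of freedom a fair die typically scores around 5,
# and anything past about 11 is suspicious at the 5 percent level.
print(chi_square(fair), chi_square(loaded))
```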

Jack Pullan’s Boomerangs rerun for the 2nd is built on the one bit of statistical mechanics that everybody knows, that something or other about entropy always increasing. It’s not a quantum mechanics rule, but it’s a natural confusion. Quantum mechanics has the reputation as the source of all the most solid, irrefutable laws of the universe’s working. Statistical mechanics and thermodynamics have this musty odor of 19th-century steam engines, no matter how much there is to learn from there. Anyway, the collapse of systems into disorder is not an irrevocable thing. It takes only energy or luck to overcome disorderliness. And in many cases we can substitute time for luck.

Scott Hilburn’s The Argyle Sweater for the 3rd is the anthropomorphic-geometry-figure joke that I’ve been waiting for. I had thought Hilburn did this all the time, although a quick review of Reading the Comics posts suggests he’s been more about anthropomorphic numerals the past year. This is why I log even the boring strips: you never know when I’ll need to check the last time Scott Hilburn used “acute” to mean “cute” in reference to triangles.

Mike Thompson’s Grand Avenue uses some arithmetic as the visual cue for “any old kind of schoolwork, really”. Steve Breen’s name seems to have gone entirely from the comic strip. On Usenet group rec.arts.comics.strips Brian Henke found that Breen’s name hasn’t actually been on the comic strip since May, and D D Degg found a July 2014 interview indicating Thompson had mostly taken the strip over from originator Breen.

Mark Anderson’s Andertoons for the 5th is another name-drop that doesn’t have any real mathematics content. But come on, we’re talking Andertoons here. If I skipped it the world might end or something untoward like that.

Ted Shearer’s Quincy for the 14th of November, 1977, and reprinted the 7th of January, 2017. I kind of remember having a lamp like that. I don’t remember ever sitting down to do my mathematics homework with a paintbrush.

Ted Shearer’s Quincy for the 14th of November, 1977, doesn’t have any mathematical content really. Just a mention. But I need some kind of visual appeal for this essay and Shearer is usually good for that.

Corey Pandolph, Phil Frank, and Joe Troise’s The Elderberries rerun for the 7th is also a very marginal mention. But, what the heck, it’s got some of your standard wordplay about angles and it’ll get this week’s essay that much closer to 800 words.

Reading the Comics, December 30, 2016: New Year’s Eve Week Edition

So last week, for schedule reasons, I skipped the Christmas Eve strips and promised to get to them this week. There weren’t any Christmas Eve mathematically-themed comic strips. Figures. This week, I need to skip New Year’s Eve comic strips for similar schedule reasons. If there are any, I’ll talk about them next week.

Lorie Ransom’s The Daily Drawing for the 28th is a geometry wordplay joke for this installment. Two of them, when you read the caption.

John Graziano’s Ripley’s Believe It or Not for the 28th presents the quite believable claim that Professor Dwight Barkley created a formula to estimate how long it takes a child to ask “are we there yet?” I am skeptical the equation given means all that much. But it’s normal mathematician-type behavior to try modelling stuff. That will usually start with thinking of what one wants to represent, and what things about it could be measured, and how one expects these things might affect one another. There are usually several plausible-sounding models, and one has to select the one or ones that seem likely to be interesting. They have to be simple enough to calculate, but still interesting. They need to have consequences that aren’t obvious. And then there’s the challenge of validating the model. Does its description match the thing we’re interested in well enough to be useful? Or at least instructive?

Len Borozinski’s Speechless for the 28th name-drops Albert Einstein and the theory of relativity. Marginal mathematical content, but it’s a slow week.

John Allison’s Bad Machinery for the 29th mentions higher dimensions. More dimensions. In particular it names ‘ana’ and ‘kata’ as “the weird extra dimensions”. Ana and kata are a pair of directions coined by the mathematician Charles Howard Hinton to give us a way of talking about directions in hyperspace. They echo the up/down, left/right, in/out pairs. I don’t know that any mathematicians besides Rudy Rucker actually use these words, though, and that in his science fiction. I may not read enough four-dimensional geometry to know the working lingo. Hinton also coined the “tesseract”, which has escaped from being a mathematician’s specialist term into something normal people might recognize. Mostly because of Madeleine L’Engle, I suppose, but that counts.

Samson’s Dark Side of the Horse for the 29th is Dark Side of the Horse’s entry for this essay. It’s a fun bit of play on counting, especially as a way to get to sleep.

John Graziano’s Ripley’s Believe It or Not for the 29th mentions a little numbers and numerals project. Or at least representations of numbers. Finding other orders for numbers can be fun, and it’s a nice little pastime. I don’t know there’s an important point to this sort of project. But it can be fun to accomplish. Beautiful, even.

Mark Anderson’s Andertoons for the 30th relieves us by having a Mark Anderson strip for this essay. And makes for a good Roman numerals gag.

Ryan Pagelow’s Buni for the 30th can be counted as an anthropomorphic-numerals joke. I know it’s more of an “ugh, 2016 was the worst year” joke, but it parses either way.

John Atkinson’s Wrong Hands for the 30th is an Albert Einstein joke. It’s cute as it is, though.

The End 2016 Mathematics A To Z: Yang Hui’s Triangle

Today’s is another request from gaurish and another I’m glad to have as it let me learn things too. That’s a particularly fun kind of essay to have here.

Yang Hui’s Triangle.

It’s a triangle. Not because we’re interested in triangles, but because it’s a particularly good way to organize what we’re doing and show why we do that. We’re making an arrangement of numbers. First we need cells to put the numbers in.

Start with a single cell in what’ll be the top middle of the triangle. It spreads out in rows beneath that. The rows are staggered. The second row has two cells, each one-half width to the side of the starting one. The third row has three cells, each one-half width to the sides of the row above, so that its center cell is directly under the original one. The fourth row has four cells, two of which are exactly underneath the cells of the second row. The fifth row has five cells, three of them directly underneath the third row’s cells. And so on. You know the pattern. It’s the one the pins in a plinko board take. Just trimmed down to a triangle. Make as many rows as you find interesting. You can always add more later.

In the top cell goes the number ‘1’. There’s also a ‘1’ in the leftmost cell of each row, and a ‘1’ in the rightmost cell of each row.

What of interior cells? The number for those we work out by looking to the row above. Take the cells to the immediate left and right of it. Add the values of those together. So for example the center cell in the third row will be ‘1’ plus ‘1’, commonly regarded as ‘2’. In the fourth row the leftmost cell is ‘1’; it always is. The next cell over will be ‘1’ plus ‘2’, from the row above. That’s ‘3’. The cell next to that will be ‘2’ plus ‘1’, a subtly different ‘3’. And the last cell in the row is ‘1’ because it always is. In the fifth row we get, starting from the left, ‘1’, ‘4’, ‘6’, ‘4’, and ‘1’. And so on.
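The addition rule is simple enough to hand off to a computer. Here is a minimal sketch in Python; the function name is my own, nothing standard.

```python
# A sketch of the rule above: every row starts and ends with '1', and
# each interior cell is the sum of the two cells above it.
def yang_hui_rows(count):
    rows = [[1]]
    while len(rows) < count:
        above = rows[-1]
        # Pad the row above with zeros off each edge and add neighbors.
        rows.append([a + b for a, b in zip([0] + above, above + [0])])
    return rows

for row in yang_hui_rows(5):
    print(row)
```

The fifth row comes out 1, 4, 6, 4, 1, just as the construction promises.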

It’s a neat little arithmetic project. It has useful application beyond the joy of making something neat. Many neat little arithmetic projects don’t have that. But the numbers in each row give us binomial coefficients, which we often want to know. That is, if we wanted to work out (a + b) to, say, the fourth power, we would know what it looks like from looking at the fifth row of Yang Hui’s Triangle. It will be $1\cdot a^4 + 4\cdot a^3 \cdot b^1 + 6\cdot a^2\cdot b^2 + 4\cdot a^1\cdot b^3 + 1\cdot b^4$. This turns up in polynomials all the time.
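If you’d rather not multiply polynomials to check that claim, the standard library can do the comparing. A hedged sketch; `yang_hui_row` is my own name for the helper, while `math.comb` is Python’s built-in binomial coefficient.

```python
import math

# Build one row by the multiplicative rule, then compare it to the
# binomial coefficients C(n, 0) through C(n, n).
def yang_hui_row(n):
    row = [1]
    for k in range(n):
        row.append(row[-1] * (n - k) // (k + 1))
    return row

print(yang_hui_row(4))                      # coefficients of (a + b)^4
print([math.comb(4, k) for k in range(5)])  # the same numbers
```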

Look at diagonals. By diagonal here I mean a line parallel to the line of ‘1’s. Left side or right side; it doesn’t matter. Yang Hui’s triangle is bilaterally symmetric around its center. The first diagonal under the edges is a bit boring but familiar enough: 1-2-3-4-5-6-7-et cetera. The second diagonal is more curious: 1-3-6-10-15-21-28 and so on. You’ve seen those numbers before. They’re called the triangular numbers. They’re the number of dots you need to make a uniformly spaced, staggered-row triangle. Doodle a bit and you’ll see. Or play with coins or pool balls.

The third diagonal looks more arbitrary yet: 1-4-10-20-35-56-84 and on. But these are something too. They’re the tetrahedral numbers. They’re the number of things you need to make a tetrahedron. Try it out with a couple of balls. Oranges if you’re bored at the grocer’s. Four, ten, twenty, these make a nice stack. The fourth diagonal is a bunch of numbers I never paid attention to before. 1-5-15-35-70-126-210 and so on. This is — well. We just did tetrahedrons, the triangular arrangement of three-dimensional balls. Before that we did triangles, the triangular arrangement of two-dimensional discs. Do you want to put in a guess what these “pentatope numbers” are about? Sure, but you hardly need to. If we’ve got a bunch of four-dimensional hyperspheres and want to stack them in a neat triangular pile we need one, or five, or fifteen, or so on to make the pile come out neat. You can guess what might be in the fifth diagonal. I don’t want to think too hard about making triangular heaps of five-dimensional hyperspheres.
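All these diagonals come from a single formula: the d-th diagonal in from the edge lists the binomial coefficients C(n + d, d). A sketch, with indexing conventions of my own choosing:

```python
import math

# Diagonal 0 is the edge of 1s; diagonal 2 gives the triangular numbers,
# diagonal 3 the tetrahedral numbers, diagonal 4 the pentatope numbers.
def diagonal(d, length):
    return [math.comb(n + d, d) for n in range(length)]

print(diagonal(2, 7))  # 1, 3, 6, 10, 15, 21, 28
print(diagonal(3, 7))  # 1, 4, 10, 20, 35, 56, 84
print(diagonal(4, 7))  # 1, 5, 15, 35, 70, 126, 210
```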

There’s more stuff lurking in here, waiting to be decoded. Add the numbers of, say, row four up and you get two raised to the third power. Add the numbers of row ten up and you get two raised to the ninth power. You see the pattern. Add everything in, say, the top five rows together and you get the fifth Mersenne number, two raised to the fifth power (32) minus one (31, when we’re done). Add everything in the top ten rows together and you get the tenth Mersenne number, two raised to the tenth power (1024) minus one (1023).
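Those sums are quick to verify by machine. A small sketch, keeping the essay’s 1-indexed rows (so row four is 1, 3, 3, 1):

```python
# Row n of the triangle, built by the multiplicative rule.
def row(n):
    r = [1]
    for k in range(n - 1):
        r.append(r[-1] * (n - 1 - k) // (k + 1))
    return r

print(sum(row(4)))                              # 2^3 = 8
print(sum(sum(row(n)) for n in range(1, 6)))    # 2^5 - 1 = 31
print(sum(sum(row(n)) for n in range(1, 11)))   # 2^10 - 1 = 1023
```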

Or add together things on “shallow diagonals”. Start from a ‘1’ on the outer edge. I’m going to suppose you started on the left edge, but remember symmetry; it’ll be fine if you go from the right instead. Add to that ‘1’ the number you get by moving one cell to the right and going up-and-right. And then again, go one cell to the right and then one cell up-and-right. And again and again, until you run out of cells. You get the Fibonacci sequence, 1-1-2-3-5-8-13-21-and so on.
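Summing a shallow diagonal amounts to summing binomial coefficients C(n − k, k), which is one of the standard ways the Fibonacci numbers sneak out of the triangle. A sketch, with my own indexing:

```python
import math

# The n-th shallow diagonal sums C(n - k, k) over valid k.
def shallow_diagonal_sum(n):
    return sum(math.comb(n - k, k) for k in range(n // 2 + 1))

print([shallow_diagonal_sum(n) for n in range(8)])  # 1 1 2 3 5 8 13 21
```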

We can even make an astounding picture from this. Take the cells of Yang Hui’s triangle. Color them in. One shade if the cell has an odd number, another if the cell has an even number. It will create a pattern we know as the Sierpiński Triangle. (Wacław Sierpiński is proving to be the surprise special guest star in many of this A To Z sequence’s essays.) That’s the fractal of a triangle subdivided into four triangles with the center one knocked out, and the remaining triangles themselves subdivided into four triangles with the center knocked out, and on and on.
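You can watch the pattern emerge even in a text terminal. A sketch of the coloring, odd cells drawn as ‘#’ and even cells as ‘.’:

```python
import math

# Print each row of the triangle, shaded by parity.  More rows make
# the Sierpinski pattern clearer.
for n in range(16):
    row = ''.join('#' if math.comb(n, k) % 2 else '.' for k in range(n + 1))
    print(row.center(31))
```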

By now I imagine even my most skeptical readers agree this is an interesting, useful mathematical construct. Also that they’re wondering why I haven’t said the name “Blaise Pascal”. The Western mathematical tradition knows of this from Pascal’s work, particularly his 1653 Traité du triangle arithmétique. But mathematicians like to say their work is universal, and independent of the mere human beings who find it. Constructions like this triangle give support to this. Yang lived in China, in the 13th century. I imagine it possible Pascal had heard of his work or been influenced by it, by some chain, but I know of no evidence that he did.

And even if he had, there are other apparently independent inventions. The Avanti Indian astronomer-mathematician-astrologer Varāhamihira described the addition rule which makes the triangle work in commentaries written around the year 500. Omar Khayyám, who keeps appearing in the history of science and mathematics, wrote about the triangle in his 1070 Treatise on Demonstration of Problems of Algebra. Again so far as I am aware there’s not a direct link between any of these discoveries. They are things different people in different traditions found because the tools — arithmetic and aesthetically-pleasing orders of things — were ready for them.

Yang Hui wrote about his triangle in the 1261 book Xiangjie Jiuzhang Suanfa. In it he credits the triangle’s use (for finding roots) to the mathematician Jia Xian, who invented it around 1100. This reminds us that it is not merely mathematical discoveries that are found by many peoples at many times and places. So is Boyer’s Law, discovered by Hubert Kennedy.

• gaurish 6:46 pm on Thursday, 29 December, 2016

This is first time that I have read an article about Pascal triangle without a picture of it in front of me and could still imagine it in my mind. :)


• Joseph Nebus 5:22 am on Thursday, 5 January, 2017

Thank you; I’m glad you like it. I did spend a good bit of time before writing the essay thinking about why it is a triangle that we use for this figure, and that helped me think about how things are organized and why. (The one thing I didn’t get into was identifying the top row, the single cell, as row zero. Computers may index things starting from zero and there may be fair reasons to do it, but that is always going to be a weird choice for humans.)


Reading the Comics, December 17, 2016: Sleepy Week Edition

Comic Strip Master Command sent me a slow week in mathematical comics. I suppose they knew I was somehow on a busier schedule than usual and couldn’t spend all the time I wanted just writing. I appreciate that but don’t want to see another of those weeks when nothing qualifies. Just a warning there.

John Rose’s Barney Google and Snuffy Smith for the 12th of December, 2016. I appreciate the desire to pay attention to continuity that makes Rose draw in the coffee cup in both panels, but Snuffy Smith has to swap it from one hand to the other to keep it in view there. Not implausible, just kind of busy. Also I can’t fault Jughaid for looking at two pages full of unillustrated text and feeling lost. That’s some Bourbaki-grade geometry going on there.

John Rose’s Barney Google and Snuffy Smith for the 12th is a bit of mathematical wordplay. It does use geometry as the “hard mathematics we don’t know how to do”. That’s a change from the usual algebra. And that’s odd considering the joke depends on an idiom that is actually used by real people.

Patrick Roberts’s Todd the Dinosaur for the 12th uses mathematics as the classic impossibly hard subject a seven-year-old can’t be expected to understand. The worry about fractions seems age-appropriate. I don’t know whether it’s fashionable to give elementary school students experience thinking of ‘x’ and ‘y’ as numbers. I remember that as a time when we’d get a square or circle and try to figure what number fits in the gap. It wasn’t a 0 or a square often enough.

Patrick Roberts’s Todd the Dinosaur for the 12th of December, 2016. Granting that Todd’s a kid dinosaur and that T-Rexes are not renowned for the hugeness of their arms, wouldn’t that still be enough space for a lot of text to fit around? I would have thought so anyway. I feel like I’m pluralizing ‘T-Rex’ wrong, but what would possibly be right? ‘Ts-rex’? Don’t make me try to spell tyrannosaurus.

Jef Mallett’s Frazz for the 12th uses one of those great questions I think every child has. And it uses it to question how we can learn things from statistical study. This is circling around the “Bayesian” interpretation of probability, of what odds mean. It’s a big idea and I’m not sure I’m competent to explain it. It amounts to asking what explanations would be plausibly consistent with observations. As we get more data we may be able to rule some cases in or out. It can be unsettling. It demands we accept right up front that we may be wrong. But it lets us find reasonably clean conclusions out of the confusing and muddy world of actual data.
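For anyone curious what that updating looks like in practice, here is a toy sketch, entirely my own illustration and nothing from the strip: two competing hypotheses about a coin, with beliefs revised after each flip.

```python
# Bayes' rule, in miniature: multiply each hypothesis's prior by the
# likelihood of the observation, then renormalize.
def update(priors, heads):
    posteriors = {bias: p * (bias if heads else 1 - bias)
                  for bias, p in priors.items()}
    total = sum(posteriors.values())
    return {b: p / total for b, p in posteriors.items()}

beliefs = {0.5: 0.5, 0.9: 0.5}    # fair coin versus a heads-heavy coin
for flip in [True, True, True]:   # three heads in a row
    beliefs = update(beliefs, flip)
print(beliefs)   # the heads-heavy hypothesis gains ground
```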

Sam Hepburn’s Questionable Quotebook for the 14th illustrates an old observation about the hypnotic power of decimal points. I think Hepburn’s gone overboard in this, though: six digits past the decimal in this percentage is too many. It draws attention to the fakeness of the number. One, two, maybe three digits past the decimal would have a more authentic ring to them. I had thought the John Allen Paulos tweet above was about this comic, but it’s mere coincidence. Funny how that happens.

Reading the Comics, December 5, 2016: Cameo Appearances Edition

Comic Strip Master Command sent a bunch of strips my way this past week. They’ll get out to your way over this week. The first bunch are all on Gocomics.com, so I don’t feel quite fair including the strips themselves. This set also happens to be a bunch in which mathematics gets a passing mention, or is just used because they need some subject and mathematics is easy to draw into a joke. That’s all right.

Jef Mallett’s Frazz for the 4th uses blackboard arithmetic and the iconic minor error of arithmetic. It’s also strikingly well-composed; look at the art from a little farther away. Forgetting to carry the one is maybe a perfect minor error for this sort of thing. Everyone does it, experienced mathematicians included. It’s very gradable. When someone’s learning arithmetic making this mistake is considered evidence that someone doesn’t know how to add. When someone’s learned it, making the mistake isn’t considered evidence the person doesn’t know how to add. A lot of mistakes work that way, somehow.

Rick Stromoski’s Soup to Nutz for the 4th name-drops Fundamentals of Algebra as a devilish, ban-worthy book. Everyone feels that way. Mathematics majors get that way around two months into their Introduction To Not That Kind Of Algebra course too. I doubt Stromoski has any particular algebra book in mind, but it doesn’t matter. The convention in mathematics books is to make titles that are ruthlessly descriptive, with not a touch of poetry to them. Among the mathematics books I have on my nearest shelf are Resnikoff and Wells’s Mathematics in Civilization; Koks’ Explorations in Mathematical Physics: The Concepts Behind An Elegant Language; Enderton’s A Mathematical Introduction To Logic; Courant, Robbins, and Stewart’s What Is Mathematics?; Murasugi’s Knot Theory And Its Applications; Nishimori’s Statistical Physics of Spin Glasses and Information Processing; Brush’s The Kind Of Motion We Call Heat, and so on. Only the Brush title has the slightest poetry to it, and it’s a history (of thermodynamics and statistical mechanics). The Courant/Robbins/Stewart has a title you could imagine on a bookstore shelf, but it’s also in part a popularization.

It’s the convention, and it’s all right in its domain. If you are deep in the library stacks and don’t know what a book is about, the spine will tell you what the subject is. You might not know what level or depth the book is in, but you’ll know what the book is. The down side is if you remember having liked a book but not who wrote it you’re lost. Methods of Functional Analysis? Techniques in Modern Functional Analysis? … You could probably make a bingo game out of mathematics titles.

Johnny Hart’s Back to B.C. for the 5th, a rerun from 1959, plays on the dawn of mathematics and the first thoughts of parallel lines. If parallel lines stir feelings in people they’re complicated feelings. One’s either awed at the resolute and reliable nature of the lines’ interaction, or is heartbroken that the things will never come together (or, I suppose, break apart). I can feel both sides of it.

Dave Blazek’s Loose Parts for the 5th features the arithmetic blackboard as inspiration for a prank. It’s the sort of thing harder to do with someone’s notes for an English essay. But, to spoil the fun, I have to say in my experience something fiddled with in the middle of a board wouldn’t even register. In much the way people will read over typos, their minds seeing what should be there instead of what is, a minor mathematical error will often not be seen. The mathematician will carry on with what she thought should be there. Especially if the error is a few lines back of the latest work. Not always, though, and when it doesn’t it’s a heck of a problem. (And here I am thinking of the week, the week, I once spent stymied by a problem because I was differentiating the function $e^x$ wrong. The hilarious thing here is it is impossible to find something easier to differentiate than $e^x$. After you differentiate it correctly you get $e^x$. An advanced squirrel could do it right, and here I was in grad school doing it wrong.)

Nate Creekmore’s Maintaining for the 5th has mathematics appear as the sort of homework one does. And a word problem that uses coins for whatever work it does. Coins should be good bases for word problems. They’re familiar enough and people do think about them, and if all else fails someone could in principle get enough dimes and quarters and just work it out by hand.

Sam Hepburn’s Questionable Quotebook for the 5th uses a blackboard full of mathematics to signify a monkey’s extreme intelligence. There’s a little bit of calculus in there, an appearance of “$\frac{df}{dx}$” and a mention of the limit. These are things you get right up front of a calculus course. They’ll turn up in all sorts of problems you try to do.

Charles Schulz’s Peanuts for the 5th is not really about mathematics. Peppermint Patty just mentions it on the way to explaining the depths of her not-understanding stuff. But it’s always been one of my favorite declarations of not knowing what’s going on so I do want to share it. The strip originally ran the 8th of December, 1969.

The End 2016 Mathematics A To Z: Principal

Functions. They’re at the center of so much mathematics. They have three pieces: a domain, a range, and a rule. The one thing functions absolutely must do is match stuff in the domain to one and only one thing in the range. So this is where it gets tricky.

Principal.

Thing with this one-and-only-one thing in the range is it’s not always practical. Sometimes it only makes sense to allow for something in the domain to match several things in the range. For example, suppose we have the domain of positive numbers. And we want a function that gives us the numbers which, squared, are whatever the original number was. For any positive real number there’s two numbers that do that. 4 should match to both +2 and -2.

You might ask why I want a function that tells me the numbers which, squared, equal something. I ask back, what business is that of yours? I want a function that does this and shouldn’t that be enough? We’re getting off to a bad start here. I’m sorry; I’ve been running ragged the last few days. I blame the flat tire on my car.

Anyway. I’d want something like that function because I’m looking for what state of things makes some other thing true. This turns up often in “inverse problems”, problems in which we know what some measurement is and want to know what caused the measurement. We do that sort of problem all the time.

We can handle these multi-valued functions. Of course we can. Mathematicians are as good at loopholes as anyone else is. Formally we declare that the range isn’t the real numbers but rather sets of real numbers. My what-number-squared function then matches ‘4’ in the domain to the set of numbers ‘+2 and -2’. The set has several things in it, but there’s just the one set. Clever, huh?
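The loophole is easy to mimic in code. A hedged sketch of the set-valued idea, names my own:

```python
# Match a non-negative number to the set of numbers which, squared,
# give it back.  The range is sets, so each input maps to one thing.
def square_roots(x):
    if x == 0:
        return {0.0}
    r = x ** 0.5
    return {r, -r}

print(square_roots(4))   # one set, two members: 2.0 and -2.0
```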

This sort of thing turns up a lot. There’s two numbers that, squared, give us any real number (except zero). There’s three numbers that, cubed, give us any real number (again except zero). Polynomials might have a whole bunch of numbers that make some equation true. Trig functions are worse. The tangent of 45 degrees equals 1. So is the tangent of 225 degrees. Also 405 degrees. Also -45 degrees. Also -585 degrees. OK, a mathematician would use radians instead of degrees, but that just changes what the numbers are. Not that there’s infinitely many of them.

It’s nice to have options. We don’t always want options. Sometimes we just want one blasted simple answer to things. It’s coded into the language. We say “the square root of four”. We speak of “the arctangent of 1”, which is to say, “the angle with tangent of 1”. We only say “all square roots of four” if we’re making a point about overlooking options.

If we’ve got a set of things, then we can pick out one of them. This is obvious, which means it is so very hard to prove. We just have to assume we can. Go ahead; assume we can. Our pick of the one thing out of this set is the “principal”. It’s not any more inherently right than the other possibilities. It’s just the one we choose to grab first.

So. The principal square root of four is positive two. The principal arctangent of 1 is 45 degrees, or in the dialect of mathematicians π divided by four. We pick these values over other possibilities because they’re nice. What makes them nice? Well, they’re nice. Um. Most of their numbers aren’t that big. They use positive numbers if we have a choice in the matter. Deep down we still suspect negative numbers of being up to something.

If nobody says otherwise then the principal square root is the positive one, or the one with a positive number in front of the imaginary part. If nobody says otherwise the principal arcsine is between -90 and +90 degrees (-π/2 and π/2). The principal arccosine is between 0 and 180 degrees (0 and π), unless someone says otherwise. The principal arctangent is … between -90 and 90 degrees, unless it’s between 0 and 180 degrees. You can count on the 0 to 90 part. Use your best judgement and roll with whatever develops for the other half of the range there. There’s not one answer that’s right for every possible case. The point of a principal value is to pick out one answer that’s usually a good starting point.
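Programming languages bake these conventions in, which makes them easy to poke at. A sketch using Python’s standard library, which returns principal values throughout:

```python
import math
import cmath

print(math.sqrt(4))    # principal square root: 2.0, never -2.0
print(math.atan(1))    # principal arctangent: pi/4
print(math.asin(0.5))  # always lands in [-pi/2, pi/2]
print(cmath.sqrt(-4))  # principal complex square root: 2j, not -2j
```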

When you stare at what it means to be a function you realize that there’s a difference between the original function and the one that returns the principal value. The original function has a range that’s “sets of values”. The principal-value version has a range that’s just one value. If you’re being kind to your audience you make some note of that. Usually we note this by capitalizing the start of the function: “arcsin z” gives way to “Arcsin z”. “Log z” would be the principal-value version of “log z”. When you start pondering logarithms for negative numbers or for complex-valued numbers you get multiple values, much the same way the arcsine function does.

And it’s good to warn your audience which principal value you mean, especially for the arc-trigonometric-functions or logarithms. (I’ve never seen someone break the square root convention.) The principal value is about picking the most obvious and easy-to-work-with value out of a set of them. It’s just impossible to get everyone to agree on what the obvious is.

The End 2016 Mathematics A To Z: Osculating Circle

I’m happy to say it’s another request today. This one’s from HowardAt58, author of the Saving School Math blog. He’s given me some great inspiration in the past.

Osculating Circle.

It’s right there in the name. Osculating. You know what that is from that one Daffy Duck cartoon where he cries out “Greetings, Gate, let’s osculate” while wearing a moustache. Daffy’s imitating somebody there, but goodness knows who. Someday the mystery drives the young you to a dictionary web site. Osculate means kiss. This doesn’t seem to explain the scene. Daffy was imitating Jerry Colonna. That meant something in 1943. You can find him on old-time radio recordings. I think he’s funny, in that 40s style.

Make the substitution. A kissing circle. Suppose it’s not some playground antic one level up from the Kissing Bandit that plagues recess, yet one or two levels down from what we imagine we’d do in high school. It suggests a circle that comes really close to something, that touches it a moment, and then goes off its own way.

But then touching. We know another word for that. It’s the root behind “tangent”. Tangent is a trigonometry term. But it appears in calculus too. The tangent line is a line that touches a curve at one specific point and is going in the same direction as the original curve is at that point. We like this because … well, we do. The tangent line is a good approximation of the original curve, at least at the tangent point and for some region local to that. The tangent touches the original curve, and maybe it does something else later on. What could kissing be?

The osculating circle is about approximating an interesting thing with a well-behaved thing. So are similar things with names like “osculating curve” or “osculating sphere”. We need that a lot. Interesting things are complicated. Well-behaved things are understood. We move from what we understand to what we would like to know, often, by an approximation. This is why we have tangent lines. This is why we build polynomials that approximate an interesting function. They share the original function’s value, and its derivative’s value. A polynomial approximation can share many derivatives. If the function is nice enough, and the polynomial big enough, it can be impossible to tell the difference between the polynomial and the original function.
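As a small illustration of that matching-derivatives idea, my own and not from the essay: a few Taylor terms for cosine already agree with the function to many decimal places near zero.

```python
import math

# Partial sums of the Taylor series for cos(x) about 0.
def cos_taylor(x, terms=4):
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(terms))

print(cos_taylor(0.1))   # nearly indistinguishable from...
print(math.cos(0.1))     # ...the real thing, this close to 0
```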

The osculating circle, or sphere, isn’t so concerned with matching derivatives. I know, I’m as shocked as you are. Well, it matches the first and the second derivatives of the original curve. Anything past that, though, it matches only by luck. The osculating circle is instead about matching the curvature of the original curve. The curvature is what you think it would be: it’s how much a function curves. If you imagine looking closely at the original curve and an osculating circle they appear to be two arcs that come together. They must touch at one point. They might touch at others, but that’s incidental.
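Curvature has a concrete formula for a curve y = f(x), and the osculating circle’s radius is its reciprocal. A numerical sketch, my own construction: for the parabola y = x² at the origin the radius comes out one-half.

```python
# curvature = |f''| / (1 + f'^2)^(3/2), estimated by finite differences.
def curvature(f, x, h=1e-5):
    d1 = (f(x + h) - f(x - h)) / (2 * h)
    d2 = (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)
    return abs(d2) / (1 + d1 * d1) ** 1.5

k = curvature(lambda t: t * t, 0.0)
print(1 / k)   # radius of the osculating circle: about 0.5
```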

Osculating circles, and osculating spheres, sneak out of mathematics and into practical work. This is because we often want to work with things that are almost circles. The surface of the Earth, for example, is not a sphere. But it’s only a tiny bit off. It’s off in ways that you only notice if you are doing high-precision mapping. Or taking close measurements of things in the sky. Sometimes we do this. So we map the Earth locally as if it were a perfect sphere, with curvature exactly what its curvature is at our observation post.

Or we might be observing something moving in orbit. If the universe had only two things in it, and they were the correct two things, all orbits would be simple: they would be ellipses. They would have to be “point masses”, things that have mass without any volume. They never are. They’re always shapes. Spheres would be fine, but they’re never perfect spheres even. The slight difference between a perfect sphere and whatever the things really are affects the orbit. Or the other things in the universe tug on the orbiting things. Or the thing orbiting makes a course correction. All these things make little changes in the orbiting thing’s orbit. The actual orbit of the thing is a complicated curve. The orbit we could calculate is an osculating — well, an osculating ellipse, rather than an osculating circle. Similar idea, though. Call it an osculating orbit if you’d rather.

That osculating circles have practical uses doesn’t mean they aren’t respectable mathematics. I’ll concede they’re not used as much as polynomials or sine curves are. I suppose that’s because polynomials and sine curves have nicer derivatives than circles do. But osculating circles do turn up as ways to try solving nonlinear differential equations. We need the help. Linear differential equations anyone can solve. Nonlinear differential equations are pretty much impossible. They also turn up in signal processing, as ways to find the frequencies of a signal from a sampling of data. This, too, we would like to know.

We get the name “osculating circle” from Gottfried Wilhelm Leibniz. This might not surprise. Finding easy-to-understand shapes that approximate interesting shapes is why we have calculus. Isaac Newton described a way of making them in the Principia Mathematica. This also might not surprise. Of course they would on this subject come so close together without kissing.

The End 2016 Mathematics A To Z: Monster Group

Today’s is one of my requested mathematics terms. This one comes to us from group theory, by way of Gaurish, and as ever I’m thankful for the prompt.

Monster Group.

It’s hard to learn from an example. Examples are great, and I wouldn’t try teaching anything subtle without one. Might not even try teaching the obvious without one. But a single example is dangerous. The learner has trouble telling what parts of the example are the general lesson to learn and what parts are just things that happen to be true for that case. Having several examples, of different kinds of things, saves the student. The thing in common to many different examples is the thing to retain.

The mathematics major learns group theory in Introduction To Not That Kind Of Algebra, MAT 351. A group extracts the barest essence of arithmetic: a bunch of things and the ability to add them together. So what’s an example? … Well, the integers do nicely. What’s another example? … Well, the integers modulo two, where the only things are 0 and 1 and we know 1 + 1 equals 0. What’s another example? … The integers modulo three, where the only things are 0 and 1 and 2 and we know 1 + 2 equals 0. How about another? … The integers modulo four? Modulo five?

All true. All, also, basically the same thing. The whole set of integers, or of real numbers, is different. But as finite groups, the integers modulo anything are nice, easy-to-understand groups. They’re known as Cyclic Groups for reasons I’ll explain if asked. But all the Cyclic Groups are kind of the same.
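The rule behind every one of these Cyclic Groups is the same: add, then take the remainder. A sketch:

```python
# Addition in the integers modulo n.
def cyclic_add(a, b, n):
    return (a + b) % n

print(cyclic_add(1, 1, 2))   # 0, in the integers modulo two
print(cyclic_add(1, 2, 3))   # 0, in the integers modulo three
```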

So how about another example? And here we get some good ones. There’s the Permutation Groups. These are fun. You start off with a set of things. You can label them anything you like, but you’re daft if you don’t label them the counting numbers. So, say, the set of things 1, 2, 3, 4, 5. Start with them in that order. A permutation is the swapping of any pair of those things. So swapping, say, the second and fifth things to get the list 1, 5, 3, 4, 2. The collection of all the swaps you can make is the Permutation Group on this set of things. The things in the group are not 1, 2, 3, 4, 5. The things in the permutation group are “swap the second and fifth thing” or “swap the third and first thing” or “swap the fourth and the third thing”. You maybe feel uneasy about this. That’s all right. I suggest playing with this until you feel comfortable because it is a lot of fun to play with. Playing in this case means writing out all the ways you can swap stuff, which you can always do as a string of swaps of exactly two things.
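That string-of-swaps idea is easy to experiment with. A sketch, my own construction: undo a permutation by swapping pairs of things, counting the swaps it takes.

```python
from itertools import permutations

# Sort the list back to 1, 2, 3, ... by repeatedly swapping exactly
# two things, counting the swaps along the way.
def swaps_to_sort(perm):
    p = list(perm)
    count = 0
    for i in range(len(p)):
        while p[i] != i + 1:
            j = p[i] - 1
            p[i], p[j] = p[j], p[i]   # one swap of two things
            count += 1
    return count

print(swaps_to_sort((1, 5, 3, 4, 2)))        # one swap undoes one swap
print(len(list(permutations(range(1, 6)))))  # the group has 120 things
```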

(Some people may remember an episode of Futurama that involved a brain-swapping machine. Or a body-swapping machine, if you prefer. The gimmick of the episode is that two people could only swap bodies/brains exactly one time. The problem was how to get everybody back in their correct bodies. It turns out to be possible to do, and one of the show’s writers did write a proof of it. It’s shown on-screen for a moment. Many fans were awestruck by an episode of the show inspiring a Mathematical Theorem. They’re overestimating how rare theorems are. But it is fun when real mathematics gets done as a side effect of telling a good joke. Anyway, the theorem fits well in group theory and the study of these permutation groups.)

So the student wanting examples of groups can get the Permutation Group on three elements. Or the Permutation Group on four elements. The Permutation Group on five elements. … You kind of see, this is certainly different from those Cyclic Groups. But they’re all kind of like each other.

An “Alternating Group” is one where every element is made from an even number of swaps. So, “swap the second and fifth things” would not be in an alternating group. But “swap the second and fifth things, and swap the fourth and second things” would be. And so the student needing examples can look at the Alternating Group on two elements. Or the Alternating Group on three elements. The Alternating Group on four elements. And so on. It’s slightly different from the Permutation Group. It’s certainly different from the Cyclic Group. But still, if you’ve mastered the Alternating Group on five elements you aren’t going to see the Alternating Group on six elements as all that different.
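You can count an Alternating Group the same brute-force way. A sketch: a permutation’s parity matches the parity of its inversion count, and keeping only the even half of all the orderings of four things leaves twelve elements.

```python
from itertools import permutations


def parity(perm):
    """0 if the permutation is even, 1 if odd, by counting inversions."""
    inversions = sum(1 for i in range(len(perm))
                     for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return inversions % 2


# The Alternating Group on four things: the even permutations only
even = [p for p in permutations(range(4)) if parity(p) == 0]
print(len(even))  # 12, exactly half of the 24 orderings of four things
```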

Cyclic Groups and Alternating Groups have some stuff in common. Permutation Groups not so much and I’m going to leave them in the above paragraph, waving, since they got me to the Alternating Groups I wanted.

One is that they’re finite. At least they can be. I like finite groups. I imagine students like them too. It’s nice having a mathematical thing you can write out in full and know you aren’t missing anything.

The second thing is that they are, or they can be, “simple groups”. That’s … a challenge to explain. This has to do with the structure of the group and the kinds of subgroup you can extract from it. It’s very very loosely and figuratively and do not try to pass this off at your thesis defense kind of like being a prime number. In fact, Cyclic Groups for a prime number of elements are simple groups. So are Alternating Groups on five or more elements.

So we get to wondering: what are the finite simple groups? Turns out they come in four main families. One family is the Cyclic Groups for a prime number of things. One family is the Alternating Groups on five or more things. One family is this collection called the Chevalley Groups. Those are mostly things about projections: the ways to map one set of coordinates into another. We don’t talk about them much in Introduction To Not That Kind Of Algebra. They’re too deep into Geometry for people learning Algebra. The last family is this collection called the Twisted Chevalley Groups, or the Steinberg Groups. And they … uhm. Well, I never got far enough into Geometry I’m Guessing to understand what they’re for. I’m certain they’re quite useful to people working in the field of order-three automorphisms of the whatever exactly D4 is.

And that’s it. That’s all the families there are. If it’s a finite simple group then it’s one of these. … Unless it isn’t.

Because there are a couple of stragglers. There are a few finite simple groups that don’t fit in any of the four big families. And it really is only a few. I would have expected an infinite number of weird little cases that don’t belong to a family that looks similar. Instead, there are 26. (27 if you decide a particular one of the Steinberg Groups doesn’t really belong in that family. I’m not familiar enough with the case to have an opinion.) Funny number to have turn up. It took ten thousand pages to prove there were just the 26 special cases. I haven’t read them all. (I haven’t read any of the pages. But my Algebra professors at Rutgers were proud to mention their department’s work in tracking down all these cases.)

Some of these cases have some resemblance to one another. But not enough to see them as a family the way the Cyclic Groups are. We bundle all these together in a wastebasket taxon called “the sporadic groups”. The first five of them were worked out in the 1860s and 1870s. The last of them was worked out in 1980, seven years after its existence was first suspected.

The sporadic groups all have weird sizes. The smallest one, known as M11 (for “Mathieu”, who found it and four of its siblings in the 1860s and 1870s) has 7,920 things in it. They get enormous soon after that.

The biggest of the sporadic groups, and the last one described, is the Monster Group. It’s known as M. It has a lot of things in it. In particular it’s got 808,017,424,794,512,875,886,459,904,961,710,757,005,754,368,000,000,000 things in it. So, you know, it’s not like we’ve written out everything that’s in it. We’ve just got descriptions of how you would write out everything in it, if you wanted to try. And you can get a good argument going about what it means for a mathematical object to “exist”, or to be “created”. There are something like 10^54 things in it. That’s something like a trillion times a trillion times the number of stars in the observable universe. Not just the stars in our galaxy, but all the stars in all the galaxies we could in principle ever see.
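You don’t have to take that number entirely on faith. The Monster’s order has a published prime factorization, and any language with big integers can multiply it out in full. A quick sketch:

```python
# The published prime factorization of the Monster Group's order
factors = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
           17: 1, 19: 1, 23: 1, 29: 1, 31: 1, 41: 1,
           47: 1, 59: 1, 71: 1}

order = 1
for prime, power in factors.items():
    order *= prime ** power

print(order)            # 808017424794512875886459904961710757005754368000000000
print(len(str(order)))  # 54 digits, so "something like 10^54" checks out
```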

It’s one of the rare things for which “Brobdingnagian” is an understatement. Everything about it is mind-boggling, the sort of thing that staggers the imagination more than infinitely large things do. We don’t really think of infinitely large things; we just picture “something big”. A number like that one above is definite, and awesomely big. Just read off the digits of that number; it sounds like what we imagine infinity ought to be.

We can make a chart, called the “character table”, which describes how subsets of the group interact with one another. The character table for the Monster Group is 194 rows tall and 194 columns wide. The Monster Group can be represented as this, I am solemnly assured, logical and beautiful algebraic structure. It’s something like a polyhedron in rather more than three dimensions of space. In particular it needs 196,883 dimensions to show off its particular beauty. I am taking experts’ word for it. I can’t quite imagine more than 196,883 dimensions for a thing.

And it’s a thing full of mystery. This creature of group theory makes us think of the number 196,884, one more than 196,883. The same 196,884 turns up in number theory, the study of how integers are put together. It’s the first non-boring coefficient in a thing called the j-function. It’s not coincidence. This bit of number theory and this bit of group theory are bound together, but it took some years for anyone to quite understand why.

There are more mysteries. The character table has 194 rows and columns. Each column implies a function. Some of those functions are duplicated; there are 171 distinct ones. But some of the distinct ones, it turns out, you can find by adding together multiples of others. That leaves 163 fundamentally distinct ones. 163 appears again in number theory, in the study of algebraic integers. These are, of course, not integers at all. They’re things that look like complex-valued numbers: some integer plus some (possibly other) integer times the square root of some specified negative number. They’ve got neat properties. Or weird ones.

You know how with integers there’s just one way to factor them? Like, fifteen is equal to three times five and no other set of prime numbers? Algebraic integers don’t work like that. There are usually multiple ways to factor them. There are exceptions, algebraic integers that still have unique factorings. They happen only for a few square roots of negative numbers. The biggest of those negative numbers? Minus 163.
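Here’s a concrete case of factoring going wrong, sketched in Python. It isn’t the minus-163 case; it’s the standard first counterexample, the numbers a + b√-5. Among them, six factors two genuinely different ways.

```python
def mul(x, y):
    """Multiply numbers a + b*sqrt(-5), stored as pairs (a, b)."""
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)


def norm(x):
    """The norm a^2 + 5b^2; it multiplies, so it tracks factorability."""
    a, b = x
    return a * a + 5 * b * b


print(mul((2, 0), (3, 0)))   # (6, 0): six as 2 times 3
print(mul((1, 1), (1, -1)))  # (6, 0): six as (1 + sqrt(-5))(1 - sqrt(-5))
# Norms are 4, 9, 6, 6; nothing in this system has norm 2 or 3,
# so none of these four factors can be broken down any further
print([norm(x) for x in [(2, 0), (3, 0), (1, 1), (1, -1)]])
```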

I don’t know if this 163 appearance means something. As I understand the matter, neither does anybody else.

There is some link to the mathematics of string theory. That’s an interesting but controversial and hard-to-experiment-upon model for how the physics of the universe may work. But I don’t know string theory well enough to say what it is or how surprising this should be.

The Monster Group creates a monster essay. I suppose it couldn’t do otherwise. I suppose I can’t adequately describe all its sublime mystery. Dr Mark Ronan has written a fine web page describing much of the Monster Group and the history of our understanding of it. He also has written a book, Symmetry and the Monster, to explain all this in greater depth. I’ve not read the book. But I do mean to, now.

• gaurish 9:17 am on Saturday, 10 December, 2016

It’s a shame that I somehow missed this blog post. Have you read “Symmetry and the Monster,”? Will you recommend reading it?


• Joseph Nebus 5:57 am on Saturday, 17 December, 2016

Not to fear. Given how I looked away a moment and got fourteen days behind writing comments I can’t fault anyone for missing a post or two here.

I haven’t read Symmetry and the Monster, but from Dr Ronan’s web site about the Monster Group I’m interested and mean to get to it when I find a library copy. I keep getting farther behind in my reading, admittedly. Today I realized I’d rather like to read Dan Bouk’s How Our Days Became Numbered: Risk and the Rise of the Statistical Individual, which focuses in large part on the growth of the life insurance industry in the 19th century. And even so I just got a book about the sale of timing data that was so common back when standard time was being discovered-or-invented.


The End 2016 Mathematics A To Z: Jordan Curve

I realize I used this thing in one of my Theorem Thursday posts but never quite said what it was. Let me fix that.

Jordan Curve

Get a rubber band. Well, maybe you can’t just now, even if you wanted to after I gave orders like that. Imagine a rubber band. I apologize to anyone so offended by my imperious tone that they’re refusing. It’s the convention for pop mathematics or science.

Anyway, take your rubber band. Drop it on a table. Fiddle with it so it hasn’t got any loops in it and it doesn’t twist over any. I want the whole of one edge of the band touching the table. You can imagine the table too. That is a Jordan Curve, at least as long as the rubber band hasn’t broken.

This may not look much like a circle. It might be close, but I bet it’s got some wriggles in its curves. Maybe it even curves so much the thing looks more like a kidney bean than a circle. Maybe it pinches so much that it looks like a figure eight, a couple of loops connected by a tiny bridge on the interior. Doesn’t matter. You can bring out the circle. Put your finger inside the rubber band’s loops and spiral your finger around. Do this gently and the rubber band won’t jump off the table. It’ll round out to as perfect a circle as the limitations of matter allow.

And for that matter, if we wanted, we could take a rubber band laid down as a perfect circle. Then nudge it here and push it there and wrinkle it up into as complicated a figure as you like. Either way is as possible.

A Jordan Curve is a closed curve, a curve that loops around back to itself. And it’s simple. That is, it doesn’t cross over itself at any point. However weird and loopy this figure is, as long as it doesn’t cross over itself, it’s got in a sense the same shape as a circle. We can imagine a function that matches every point on a true circle to a point on the Jordan Curve. A set of points in order on the original circle will match to points in the same order on the Jordan Curve. There’s nothing missing and there are no jumps or ambiguous points. And no point on the Jordan Curve matches to two or more on the original circle. (This is why we don’t let the curve cross over itself.)

When I wrote about the Jordan Curve Theorem it was about how to tell how a curve divides a plane into two pieces, an inside and an outside. You can have some pretty complicated-looking figures. I have an example on the Jordan Curve Theorem essay, but you can make your own by doodling. And we can look at it as a circle, as a rubber band, twisted all around.
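That inside-and-outside business can even be computed. A sketch, for a Jordan Curve made of straight segments, which is to say a polygon: cast a ray from your point off to the right and count how many times it crosses the curve. An odd count means you’re inside.

```python
def inside(point, polygon):
    """Even-odd rule: a point is inside a simple polygon when a
    rightward ray from it crosses the boundary an odd number of times."""
    x, y = point
    crossings = 0
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # this edge straddles the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > x:
                crossings += 1
    return crossings % 2 == 1


square = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(inside((2, 2), square))  # True
print(inside((5, 2), square))  # False
```

The same count works however wriggly the doodle, which is the Jordan Curve Theorem earning its keep.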

This all dips into topology, the study of how shapes connect when we don’t care about distance. But there are simple wondrous things to find about them. For example. Draw a Jordan Curve, please. Any that you like. Now draw a triangle. Again, any that you like.

There is some trio of points in your Jordan Curve which connect to a triangle the same shape as the one you drew. It may be bigger than your triangle, or smaller. But it’ll look similar. The angles inside will all be the same as the ones you started with. This should help make doodling during a dull meeting even more exciting.

There may be four points on your Jordan Curve that make a square. I don’t know. Nobody knows for sure. There certainly are if your curve is convex, that is, if no line between any two points on the curve goes outside the curve. And it’s true even for curves that aren’t convex if they are smooth enough. But generally? For an arbitrary curve? We don’t know. It might be true. It might be impossible to find a square in some Jordan Curve. It might be the Jordan Curve you drew. Good luck looking.

• gaurish 3:52 am on Thursday, 24 November, 2016

Jordan curve theorem is again in news: http://wp.me/p3qzP-2tV


• Joseph Nebus 11:11 pm on Friday, 25 November, 2016

Ooh, thank you, that’s interesting stuff. And on that conjecture about squares, too, which is so neat.


Reading the Comics, November 16, 2016: Seeing the Return of Jokes

Comic Strip Master Command sent out a big mass of comics this past week. Today’s installment will only cover about half of them. This half does feature a number of comics that show off jokes that’ve run here before. I’m sure it was coincidence. Comic Strip Master Command must have heard I was considering alerting cartoonists that I was talking about them. That’s fine for something like last week when I could talk about NP-complete problems or why we call something a “hypotenuse”. It can start a conversation. But “here’s a joke treating numerals as if they were beings”? All they can do is agree, that is what the joke is. If they disagree at that point they’re just trying to start a funny argument.

Scott Metzger’s The Bent Pinky for the 14th sees the return of anthropomorphic numerals humor. I’m a bit surprised Metzger goes so far as to make every numeral either a 3 or a 9. I’d have expected a couple of 2’s and 4’s. I understand not wanting to get into two-digit numbers. The premise of anthropomorphic numerals is troublesome if you need multiple-digit numbers.

Jon Rosenberg’s Goats for the 14th doesn’t directly mention a mathematical topic. But the story has the characters transported to a world with monkeys at typewriters. We know where that is. So we see that return after no time away, really.

Rick Detorie’s One Big Happy rerun for the 14th sees the return of “110 percent”. Happily the joke’s structured so that we can dodge arguing about whether it’s even possible to give 110 percent. I’m inclined to say of course it’s possible. “Giving 100 percent” in the context of playing a sport would mean giving the full reasonable effort. Or it does if we want to insist on idiomatic expressions making sense. It seems late to be insisting on that standard, but some people like it as an idea.

George Herriman’s Krazy Kat for the 22nd of December, 1938. Rerun the 15th of November, 2016. Really though who could sleep when they have a sweet adding machine like that to play with? Someone who noticed that that isn’t machine tape coming out the top, of course, but rather is the punch-cards for a band organ. Curiously low-dialect installment of the comic.

George Herriman’s Krazy Kat for the 22nd of December, 1938, was rerun on Tuesday. And it’s built on counting as a way of soothing the mind into restful sleep. Mathematics as a guide to sleep also appears, in minor form, in Darrin Bell’s Candorville for the 13th. I’m not sure why counting, or mental arithmetic, is able to soothe one into sleep. I suppose it’s just that it’s a task that’s engaging enough the semi-conscious mind can do it without having the emotional charge or complexity to wake someone up. I’ve taken to Collatz Conjecture problems, myself.
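The Collatz rule, if you’d like a sleep aid of your own, is easy to state and to code. A sketch: halve the number if it’s even, triple it and add one if it’s odd, and count the steps until you hit 1.

```python
def collatz(n):
    """Count steps for n to reach 1: halve evens, triple-plus-one odds."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps


print(collatz(27))  # 111 steps, a famously long ride for so small a start
```

The conjecture, unproven, is that every starting number gets to 1 eventually. Soothing to check, one number at a time.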

Terri Libenson’s Pajama Diaries for the 16th sees the return of Venn Diagram jokes. And it’s a properly-formed Venn Diagram, with the three circles coming together to indicate seven different conditions.

Terri Libenson’s Pajama Diaries for the 16th of November, 2016. I was never one for buying too much of the bakery aisle, myself, but then I also haven’t got teenagers. And I did go through so much of my life figuring there was no reason I shouldn’t eat another bagel again.

Gary Wise and Lance Aldrich’s Real Life Adventures for the 16th just name-drops rhomboids, using them as just a funny word. Geometry is filled with wonderful, funny-sounding words. I’m fond of “icosahedron” myself. But “rhomboid” and its related words are good ones. I think they hit that sweet spot between being uncommon in ordinary language without being so exotic that a reader’s eye trips over it. However funny a “triacontahedron” might be, no writer should expect the reader to forgive that pile of syllables. A rhomboid is a kind of parallelogram, so it’s got four sides. The sides come in two parallel pairs. Both members of a pair have the same length, but the different pairs don’t. They look like the kitchen tiles you’d get for a house you couldn’t really afford, not with tiling like that.

• sheldonk2014 12:08 am on Monday, 21 November, 2016

Hey Joseph I just dropped by to see
How the other half live
Play any pinball lately
Sheldon


• Joseph Nebus 11:03 pm on Friday, 25 November, 2016

Aw thanks, and glad to see you around. I’ve been pretty well. Played a lot of pinball lately, but the schedule is letting up after a lot of busy weeks. Everyone’s more or less found their place in the state’s championship series and there’s only a few people who can change their positions usefully. Should be a calm month ahead.


The End 2016 Mathematics A To Z: General Covariance

Today’s term is another request, and another of those that tests my ability to make something understandable. I’ll try anyway. The request comes from Elke Stangl, whose “Research Notes on Energy, Software, Life, the Universe, and Everything” blog I first ran across years ago, when she was explaining some dynamical systems work.

General Covariance

So, tensors. They’re the things mathematicians get into when they figure vectors just aren’t hard enough. Physics majors learn about them too. Electrical engineers really get into them. Some material science types too.

You maybe notice something about those last three groups. They’re interested in subjects that are about space. Like, just, regions of the universe. Material scientists wonder how pressure exerted on something will get transmitted. The structure of what’s in the space matters here. Electrical engineers wonder how electric and magnetic fields send energy in different directions. And physicists — well, everybody who’s ever read a pop science treatment of general relativity knows. There’s something about the shape of space something something gravity something equivalent acceleration.

So this gets us to tensors. Tensors are this mathematical structure. They’re about how stuff that starts in one direction gets transmitted into other directions. You can see how that’s got to have something to do with transmitting pressure through objects. It’s probably not too much work to figure how that’s relevant to energy moving through space. That it has something to do with space as just volume is harder to imagine. But physics types have talked about it quite casually for over a century now. Science fiction writers have been enthusiastic about it almost that long. So it’s kind of like the Roman Empire. It’s an idea we hear about early and often enough we’re never really introduced to it. It’s never a big new idea we’re presented, the way, like, you get specifically told there was (say) a War of 1812. We just soak up a couple bits we overhear about the idea and carry on as best our lives allow.

But to think of space. Start from somewhere. Imagine moving a little bit in one direction. How far have you moved? If you started out in this one direction, did you somehow end up in a different one? Now imagine moving in a different direction. Now how far are you from where you started? How far is your direction from where you might have imagined you’d be? Our intuition is built around a Euclidean space, or one close enough to Euclidean. These directions and distances and combined movements work as they would on a sheet of paper, or in our living room. But there is a difference. Walk a kilometer due east and then one due north and you will not be in exactly the same spot as if you had walked a kilometer due north and then one due east. Tensors are efficient ways to describe those little differences. And they tell us something of the shape of the Earth from knowing these differences. And they do it using much of the form that matrices and vectors do, so they’re not so hard to learn as they might be.
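You can put numbers to that east-then-north business. A sketch, with two assumptions of mine for illustration: the Earth is a perfect sphere, and each one-kilometer leg is a tiny flat step.

```python
import math

R = 6371.0  # Earth's radius in km; spherical-Earth assumption


def walk(lat, lon, legs):
    """Take 1 km legs north ('N') or east ('E') as small flat steps.
    Latitude and longitude are in radians."""
    for leg in legs:
        if leg == 'N':
            lat += 1.0 / R                    # a km north is the same anywhere
        else:
            lon += 1.0 / (R * math.cos(lat))  # a km east depends on latitude
    return lat, lon


start = (math.radians(45.0), 0.0)
east_first = walk(*start, ['E', 'N'])
north_first = walk(*start, ['N', 'E'])
print(east_first == north_first)  # False: the two orders end in different spots
```

The gap is tiny at this scale, but it isn’t zero, and that tiny order-dependent difference is the kind of thing tensors are built to describe.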

That’s all prelude. Here’s the next piece. We go looking at transformations. We take a perfectly good coordinate system and a point in it. Now let the light of the full Moon shine upon it, so that it shifts to being a coordinate werewolf. Look around you. There’s a tensor that describes how your coordinates look here. What is it?

You might wonder why we care about transformations. What was wrong with the coordinates we started with? But that’s because mathematicians have lumped a lot of stuff into the same name of “transformation”. A transformation might be something as dull as “sliding things over a little bit”. Or “turning things a bit”. It might be “letting a second of time pass”. Or “following the flow of whatever’s moving”. Stuff we’d like to know for physics work.

“General covariance” is a term that comes up when thinking about transformations. Suppose we have a description of some physics problem. By this mostly we mean “something moving in space” or “a bit of light moving in space”. That’s because they’re good building blocks. A lot of what we might want to know can be understood as some mix of those two problems.

Put your description through the same transformation your coordinate system had. This will (most of the time) change the details of how your problem’s represented. But does it change the overall description? Is our old description no longer even meaningful?

I trust at this point you’ve nodded and thought something like “well, that makes sense”. Give it another thought. How could we not have a “generally covariant” description of something? Coordinate systems are our impositions on a problem. We create them to make our lives easier. They’re real things in exactly the same way that lines of longitude and latitude are real. If we increased the number describing the longitude of every point in the world by 14, we wouldn’t change anything real about where stuff was or how to navigate to it. We couldn’t.
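The longitude thought experiment is easy to check in code. A sketch, with two hypothetical sample points of my choosing: the great-circle distance between two places cannot notice every longitude being increased by 14.

```python
import math


def haversine(p, q):
    """Great-circle distance in km; points are (lat, lon) in degrees."""
    lat1, lon1, lat2, lon2 = map(math.radians, p + q)
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371.0 * math.asin(math.sqrt(a))


def shift(pt):
    """Add 14 to a point's longitude, as in the thought experiment."""
    return (pt[0], pt[1] + 14)


# Hypothetical sample points, roughly New York and London
p, q = (40.7, -74.0), (51.5, -0.1)
print(abs(haversine(p, q) - haversine(shift(p), shift(q))) < 1e-6)  # True
```

The formula only ever sees the difference of the longitudes, so the shift cancels out. Nothing real about where stuff is has changed.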

Here I admit I’m stumped. I can’t think of a good example of a system that would look good but not be generally covariant. I’m forced to resort to metaphors and analogies that make this essay particularly unsuitable to use for your thesis defense.

So here’s the thing. Longitude is a completely arbitrary thing. Measuring where you are east or west of some prime meridian might be universal, or easy for anyone to tumble onto. But the prime meridian is a cultural choice. It’s changed before. It may change again. Indeed, Geographic Information Systems people still work with many different prime meridians. Most of them are for specialized purposes. Stuff like mapping New Jersey in feet north and east of some reference, for which Greenwich would make the numbers too ugly. But if our planet is mapped in an alien’s records, that map has at its center some line almost surely not Greenwich.

But latitude? Latitude is, at least, less arbitrary. That we measure it from zero to ninety degrees, north or south, is a cultural choice. (Or from -90 to 90 degrees. Same thing.) But that there’s a north pole and a south pole? That’s true as long as the planet is rotating. And that’s forced on us. If we tried to describe the Earth as rotating on an axis between Paris and Mexico City, we would … be fighting an uphill struggle, at least. It’s hard to see any problem that might make easier, apart from getting between Paris and Mexico City.

In models of the laws of physics we don’t really care about the north or south pole. A planet might have them or might not. But it has got some privileged stuff that just has to be so. We can’t have stuff that makes the speed of light in a vacuum change. And we have to make sense of a block of space that hasn’t got anything in it, no matter, no light, no energy, no gravity. I think those are the important pieces actually. But I’ll defer, growling angrily, to an expert in general relativity or non-Euclidean coordinates if I’ve misunderstood.

It’s often put that “general covariance” is one of the requirements for a scheme to describe General Relativity. I shall risk sounding like I’m making a joke and say that depends on your perspective. One can use different philosophical bases for describing General Relativity. In some of them you can see general covariance as a result rather than use it as a basic assumption. Here’s a 1993 paper by Dr John D Norton that describes some of the different ways to understand the point of general covariance.

By the way the term “general covariance” comes from two pieces. The “covariance” is because it describes how changes in one coordinate system are reflected in another. It’s “general” because we talk about coordinate transformations without knowing much about them. That is, we’re talking about transformations in general, instead of some specific case that’s easy to work with. This is why the mathematics of this can be frightfully tricky; we don’t know much about the transformations we’re working with. For a parallel, it’s easy to tell someone how to divide 14 into 112. It’s harder to tell them how to divide absolutely any number into absolutely any other number.

Quite a bit of mathematical physics plays into geometry. Gravity, physicists mostly see as a problem of geometry. People who like reading up on science take that as given too. But many problems can be understood as a point or a blob of points in some kind of space, and how that point moves or that blob evolves in time. We don’t see “general covariance” in these other fields exactly. But we do see things that resemble it. It’s an idea with considerable reach.

I’m not sure how I feel about this. For most of my essays I’ve kept away from equations, even for the Why Stuff Can Orbit sequence. But this is one of those subjects it’s hard to be exact about without equations. I might revisit this in a special all-symbols, calculus-included, edition. Depends what my schedule looks like.

• elkement (Elke Stangl) 7:03 pm on Wednesday, 16 November, 2016

Thanks for accepting this challenge – I think you explained it as good as one possibly can without equations!!

I think for understanding General Relativity you have to revisit some ideas from ‘flat space’ tensor calculus you took for granted, like a vector being sort of an arrow that can be moved around carelessly in space or what a coordinate transformation actually means (when applied to curved space). It seems GR is introduced either very formally, not to raise any false intuition, explaining the abstract big machinery with differentiable manifolds and atlases etc. and adding the actual physics as late as possible, or by starting from flat space metrics, staying close to ‘tangible physics’ and adding unfamiliar stuff slowly.
Sometimes I wonder if one (when trying to explain this to a freshman) could skip the ‘flat space’ part and start with the seemingly abstract but more general foundations as those cover anything? Perhaps it would be easier and more efficient never to learn about Gauss and Stokes theorem first but start with integration on manifolds and present such theorems as special cases?

And thanks for the pointer to this very interesting paper!


• Joseph Nebus 3:47 am on Sunday, 20 November, 2016

Thanks so for the kind words. I worried through the writing of this that I was going too far wrong and I admit I’m still waiting for a real expert to come along and destroy my essay and my spirits. Another few weeks and I should be far enough from writing it that I can take being told all the ways I’m wrong, though.

You’re right about the ways General Relativity seems to be often taught. And I also wonder if it couldn’t be better-taught starting from a completely abstract base and then filling in why this matches the way the world looks. Something equivalent to introducing vectors as “things that are in a vector space, which are things with these properties” instead of as arrows in space. I suspect it might not be really doable, though, based on how many times I crashed against covariant versus contravariant indices and that’s incredibly small stuff.

But there are so many oddball-perspective physics books out there that someone must have tried it at least once. And many of them are really good at least in making stuff look different, if not better. I’m sorry not to be skilled enough in the field to give it a fair try. Maybe some semester I’ll go through a proper text on this and post the notes I make on it.


• elkement (Elke Stangl) 10:35 am on Sunday, 20 November, 2016

I’ve recently stumbled upon this GR course http://www.infocobuild.com/education/audio-video-courses/physics/gravity-and-light-2015-we-heraeus.html : this lecturer is really very careful in introducing the foundations in the most abstract way, just as you say, without any intuitive references. No physics until lecture 9 (to prove that Newtonian gravity can also be presented in a generally covariant way – very interesting to read the history of science paper you linked in relation to this, BWT), then more math only, until finally in lecture 13 we return to ‘our’ spacetime.

I am also learning GR as a hobbyist project as this was not a mandatory subject in my physics degree program (I specialized in condensed matter, lasers, optics, superconductors…), and I admit I use mainly freely available sources like such lectures or detailed lecture notes. I have sort of planned to post about my favorite resources and/or that learning experience, too, but given my typical blogging frequency compared to yours I suppose I can wait for your postings and just use those as a reference :-)


• Joseph Nebus 11:02 pm on Friday, 25 November, 2016

Ooh, that’s a great-looking series, though I’ve lacked the time to watch it yet. I’ve regretted not taking a proper course on general relativity. When I was an undergraduate my physics department did a lecture series on general relativity without advanced mathematics, but it conflicted with something on my schedule and I hoped they’d rerun the series another semester. Of course they didn’t at least during my time there.


Reading the Comics, October 29, 2016: Rerun Comics Edition

There were a couple of rerun comics in this week’s roundup, so I’ll go with that theme. And I’ll put in one more appeal for subjects for my End of 2016 Mathematics A To Z. Have a mathematics term you’d like to see me go on about? Just ask! Much of the alphabet is still available.

John Kovaleski’s Bo Nanas rerun the 24th is about probability. There’s something wondrous and strange that happens when we talk about the probability of things like birth days. They are, if they’re in the past, determined and fixed things. The current day is also a known, determined, fixed thing. But we do mean something when we say there’s a 1-in-365 (or 366, or 365.25 if you like) chance of today being your birthday. It seems to me this is probability based on ignorance. If you don’t know when my birthday is then your best guess is to suppose there’s a one-in-365 (or so) chance that it’s today. But I know when my birthday is; to me, with this information, the chance today is my birthday is either 0 or 1. But what are the chances that today is a day when the chance it’s my birthday is 1? At this point I realize I need much more training in the philosophy of mathematics, and the philosophy of probability. If someone is aware of a good introductory book about it, or a web site or blog that goes into these problems in a way a lay reader will understand, I’d love to hear of it.
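This ignorance-based probability is easy to play with numerically. Here’s a minimal sketch, with a made-up day standing in for “today” and leap years ignored, that guesses a uniformly random birthday for a large number of strangers and counts how often the guess lands on today:

```python
import random

random.seed(42)

DAYS = 365          # ignore leap years, as the comic does
TODAY = 100         # arbitrary day-of-year standing in for "today"
TRIALS = 100_000

# For each "stranger", draw a uniformly random birthday and see
# whether it lands on today.
hits = sum(1 for _ in range(TRIALS) if random.randrange(DAYS) == TODAY)

estimate = hits / TRIALS
print(estimate)     # should come out near 1/365, about 0.00274
```

Of course the simulation only models the state of ignorance; for any one actual person the birthday is fixed, and the “probability” collapses to 0 or 1 the moment you know it.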

I’ve featured this installment of Poor Richard’s Almanac before. I’ll surely feature it again. I like Richard Thompson’s sense of humor. The first panel mentions non-Euclidean geometry, using the connotation that it does have. Non-Euclidean geometries are treated as these magic things — more, these sinister magic things — that defy all reason. They can’t defy reason, of course. And at least some of them are even sensible if we imagine we’re drawing things on the surface of the Earth, or at least the surface of a balloon. (There are non-Euclidean geometries that don’t look like surfaces of spheres.) They don’t work exactly like the geometry of stuff we draw on paper, or the way we fit things in rooms. But they’re not magic, not most of them.

Stephen Bentley’s Herb and Jamaal for the 25th I believe is a rerun. I admit I’m not certain, but it feels like one. (Bentley runs a lot of unannounced reruns.) Anyway I’m refreshed to see a teacher giving a student permission to count on fingers if that’s what she needs to work out the problem. Sometimes we have to fall back on the non-elegant ways to get comfortable with a method.

Dave Whamond’s Reality Check for the 25th name-drops Einstein and one of the three equations that have any pop-culture currency.

Guy Gilchrist’s Today’s Dogg for the 27th is your basic mathematical-symbols joke. We need a certain number of these.

Berkeley Breathed’s Bloom County for the 28th is another rerun, from 1981. And it’s been featured here before too. As mentioned then, Milo is using calculus and logarithms correctly in his rather needless insult of Freida. 10,000 is a constant number, and as mentioned a few weeks back its derivative must be zero. Ten to the power of zero is 1. The log of 10, if we’re using logarithms base ten, is also 1. There are many kinds of logarithms but back in 1981, the default if someone said “log” would be the logarithm base ten. Today the default is more muddled; a normal person would mean the base-ten logarithm by “log”. A mathematician might mean the natural logarithm, base ‘e’, by “log”. But why would a normal person mention logarithms at all anymore?
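The muddle over what “log” means shows up in programming languages too. In Python, for instance, `math.log` with one argument is the natural logarithm, and the base-ten logarithm needs either a second argument or its own function. A quick illustration, including the two facts Milo uses:

```python
import math

# math.log with one argument is the natural logarithm, base e.
print(math.log(math.e))    # 1.0

# The base-ten logarithm needs either a second argument or math.log10.
print(math.log(10, 10))    # 1.0
print(math.log10(10))      # 1.0

# And as in Milo's insult: 10,000 is a constant, so its derivative is 0,
# and ten to the power of zero is 1.
print(10 ** 0)             # 1
```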

Jef Mallett’s Frazz for the 28th is mostly a bit of wordplay on evens and odds. It’s marginal, but I do want to point out some comics that aren’t reruns in this batch.

Reading the Comics, October 22, 2016: The Jokes You Can Make About Fractions Edition

Last week had a whole bundle and a half of mathematically-themed comics so let me finish off the set. Also let me refresh my appeal for words for my End Of 2016 Mathematics A To Z. There’s all sorts of letters not yet claimed; please think of a mathematical term and request it!

David L Hoyt and Jeff Knurek’s Jumble for the 19th gives us a chance to do some word puzzle games again. If you like getting the big answer without doing the individual words then pay attention to the blackboard in the comic. Just saying.

David L Hoyt and Jeff Knurek’s Jumble for the 19th of October, 2016. The link will probably expire in about a month. Have to say, it’s not a big class. I’m not surprised the students are doing well.

Patrick J Marran’s Francis for the 20th features origami, as well as some of the more famous polyhedrons. The study of what shapes you can make from a flat sheet by origami processes — just folding, no cutting — is a neat one. Apparently origami geometry can be built out of seven axioms. I’m delighted to learn that the axioms were laid out as recently as 1992, with the exception of one that went unnoticed until 2002.

Gabby describes her shape as an isocahedron, which must be a typo. We all make them. There’s icosahedrons which look like that figure and I’ve certainly slipped consonants around that way.

I’m surprised and delighted to find there are ways to make an origami icosahedron. Her figure doesn’t look much like the origami icosahedron of those instructions, but there are many icosahedrons. The name just means there are 20 faces to the polyhedron so there’s a lot of room for variants.

If you were wondering, yes, the Francis of the title is meant to be the Pope. It’s kind of a Pope Francis fan comic. I cannot explain this phenomenon.

Rick Detorie’s One Big Happy rerun for the 21st retells one of the standard jokes you can always make about fractions. Fortunately it uses that only as part of the setup, which shows off why I’ve long liked Detorie’s work. Good cartoonists — good writers — take a stock joke and add something to make it fit their characters.

I’ve featured Richard Thompson’s Poor Richard’s Almanac rerun from the 21st before. I’ll surely feature it again. I just like Richard Thompson art like this. This is my dubious inclusion of the essay. In “What’s New At The Zoo” he tosses off a mention of chimpanzees now typing at 120 words per minute. A comic reference to the famous thought experiment of a monkey, or a hundred monkeys, or infinitely many monkeys given typewriters and time to write all the works of literature? Maybe. Or it might just be that it’s a funny idea. It is, of course.

Rick Kirkman and Jerry Scott’s Baby Blues for the 22nd of October, 2016. I’m not quite curious enough to look, but do wonder how far into the comments you have to go before someone slags on the Common Core. But then I would say if Hammy were to write down first an initial-impression guess of about what the answer should be — say, that “37 + 42” should be a number somewhere around 80 — and then an exact answer, then that would be consistent with what I understand Common Core techniques encourage and a pretty solid approach.

In Rick Kirkman and Jerry Scott’s Baby Blues for the 22nd Hammie offers multiple answers to each mathematics problem. “I like to increase my odds,” he says. For arithmetic problems, that’s not really helping. But it is often useful, especially in modeling complicated systems, to work out multiple answers. If you’re not sure how something should behave, and it’s troublesome to run experiments, then try to develop several different models. If the models all describe similar behavior, then, good! It’s reason to believe you’re probably right, or at least close to right. If the models disagree about their conclusions then you need information. You need experimental results. The ways your models disagree can inspire new experiments.

Mark Leiknes’s Cow and Boy rerun for the 22nd is another with one of the standard jokes you can make about fractions. I suspect I’ve featured this before too, but I quite like Cow and Boy. It’s sad that the strip was cancelled, and couldn’t make a go of it as a web comic. I’m not surprised; the strip had so many running jokes it might as well have had a deer and an orca shooting rocket-propelled grenades at new readers. But it’s grand seeing the many, many, many running jokes as they were first established. This is part of the sequence in which Billy, the Boy of the title, discovers there’s another kid named Billy in the class, quickly dubbed Smart Billy for reasons the strip makes clear.

Reading the Comics, October 19, 2016: An Extra Day Edition

I didn’t make noise about it, but last Sunday’s mathematics comic strip roundup was short one day. I was away from home and normal computer stuff Saturday. So I posted without that day’s strips under review. There was just the one, anyway.

Also I want to remind folks I’m doing another Mathematics A To Z, and taking requests for words to explain. There are many appealing letters still unclaimed, including ‘A’, ‘T’, and ‘O’. Please put requests in over on that page, because it’s easier for me to keep track of what’s been claimed that way.

Matt Janz’s Out of the Gene Pool rerun for the 15th missed last week’s cut. It does mention the Law of Cosines, which is what the Pythagorean Theorem looks like if you don’t have a right triangle. You still have to have a triangle. Bobby-Sue recites the formula correctly, if you know the notation. The formula’s $c^2 = a^2 + b^2 - 2 a b \cos\left(C\right)$. Here ‘a’ and ‘b’ and ‘c’ are the lengths of legs of the triangle. ‘C’, the capital letter, is the size of the angle opposite the leg with length ‘c’. That’s a common notation. ‘A’ would be the size of the angle opposite the leg with length ‘a’. ‘B’ is the size of the angle opposite the leg with length ‘b’. The Law of Cosines is a generalization of the Pythagorean Theorem. It’s a result that tells us something like the original theorem but for cases the original theorem can’t cover. And if it happens to be a right triangle the Law of Cosines gives us back the original Pythagorean Theorem. In a right triangle C is the size of a right angle, and the cosine of that is 0.
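The reduction to the Pythagorean Theorem is easy to check numerically. A small sketch, with made-up side lengths and the angle C given in radians:

```python
import math

def law_of_cosines(a, b, C):
    """Length of the side opposite angle C, given the other two side lengths."""
    return math.sqrt(a**2 + b**2 - 2 * a * b * math.cos(C))

# With C a right angle, cos(C) is 0 and the Pythagorean Theorem drops out:
# the 3-4-5 right triangle comes back.
c = law_of_cosines(3, 4, math.pi / 2)
print(c)   # very nearly 5.0 (floating-point cos(pi/2) isn't exactly zero)
```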

That said Bobby-Sue is being fussy about the drawings. No geometrical drawing is ever perfectly right. The universe isn’t precise enough to let us draw a right triangle. Come to it we can’t even draw a triangle, not really. We’re meant to use these drawings to help us imagine the true, Platonic ideal, figure. We don’t always get there. Mock proofs, the kind of geometric puzzle showing something we know to be nonsense, rely on that. Give chalkboard art a break.

Samson’s Dark Side of the Horse for the 17th is the return of Horace-counting-sheep jokes. So we get a π joke. I’m amused, although I couldn’t sleep trying to remember digits of π out quite that far. I do better working out Collatz sequences.

Hilary Price’s Rhymes With Orange for the 19th at least shows the attempt to relieve mathematics anxiety. I’m sympathetic. It does seem like there should be ways to relieve this (or any other) anxiety, but finding which ones work, and which ones work best, is partly a mathematical problem. As often happens with Price’s comics I’m particularly tickled by the gag in the title panel.

Hilary Price’s Rhymes With Orange for the 19th of October, 2016. I don’t think there’s enough data given to solve the problem. But it’s a start at least. Start by making a note of it on your suspiciously large sheet of paper.

Norm Feuti’s Gil rerun for the 19th builds on the idea calculators are inherently cheating on arithmetic homework. I’m sympathetic to both sides here. If Gil just wants to know that his answers are right there’s not much reason not to use a calculator. But if Gil wants to know that he followed the right process then the calculator’s useless. By the right process I mean, well, the work to be done. Did he start out trying to calculate the right thing? Did he pick an appropriate process? Did he carry out all the steps in that process correctly? If he made mistakes on any of those he probably didn’t get to the right answer, but it’s not impossible that he would. Sometimes multiple errors conspire and cancel one another out. That may not hurt you with any one answer, but it does mean you aren’t doing the problem right and a future problem might not be so lucky.

Zach Weinersmith’s Saturday Morning Breakfast Cereal rerun for the 19th has God crashing a mathematics course to proclaim there’s a largest number. We can suppose there is such a thing. That’s how arithmetic modulo a number is done, for one. It can produce weird results in which stuff we just naturally rely on doesn’t work anymore. For example, in ordinary arithmetic we know that if one number times another equals zero, then either the first number or the second, or both, were zero. We use this in solving polynomials all the time. But in arithmetic modulo 8 (say), 4 times 2 is equal to 0.
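That zero-divisor weirdness is a one-liner to check with Python’s `%` operator:

```python
# In ordinary arithmetic, a product is zero only if a factor is zero.
# In arithmetic modulo 8, two nonzero numbers can multiply to zero.
product_mod_8 = (4 * 2) % 8
print(product_mod_8)   # 0, even though neither 4 nor 2 is 0 mod 8

# All the pairs of nonzero numbers mod 8 whose product is 0:
pairs = [(a, b) for a in range(1, 8) for b in range(1, 8) if (a * b) % 8 == 0]
print(pairs)
```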

And if we recklessly talk about “infinity” as a number then we get outright crazy results, some of them teased in Weinersmith’s comic. “Infinity plus one”, for example, is “infinity”. So is “infinity minus one”. If we do it right, “infinity minus infinity” is “infinity”, or maybe zero, or really any number you want. We can avoid these logical disasters — so far, anyway — by being careful. We have to understand that “infinity” is not a number, though we can use numbers growing infinitely large.
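Floating-point arithmetic actually bakes in a version of these rules, which makes for a quick demonstration of why treating infinity as a number goes strange:

```python
import math

inf = float('inf')

print(inf + 1 == inf)    # True: "infinity plus one" is infinity
print(inf - 1 == inf)    # True: so is "infinity minus one"

# "Infinity minus infinity" has no sensible value, so IEEE arithmetic
# refuses to pick one: the result is NaN, "not a number".
print(math.isnan(inf - inf))   # True
```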

Induction, meanwhile, is a great, powerful, yet baffling form of proof. When it solves a problem it solves it beautifully. And easily, too, usually by doing something like testing two special cases. Maybe three. At least a couple special cases of whatever you want to know. But picking the cases, and setting them up so that the proof is valid, is not easy. There’s logical pitfalls and it is so hard to learn how to avoid them.

Jon Rosenberg’s Scenes from a Multiverse for the 19th plays on a wonderful paradox of randomness. Randomness is … well, unpredictable. If I tried to sell you a sequence of random numbers and they were ‘1, 2, 3, 4, 5, 6, 7’ you’d be suspicious at least. And yet, perfect randomness will sometimes produce patterns. If there were no little patches of order we’d have reason to suspect the randomness was faked. There is no reason that a message like “this monkey evolved naturally” couldn’t be encoded into a genome by chance. It may just be so unlikely we don’t buy it. The longer the patch of order the less likely it is. And yet, incredibly unlikely things do happen. The study of impossibly unlikely events is a good way to quickly break your brain, in case you need one.

Why Stuff Can Orbit, Part 6: Circles and Where To Find Them

Previously:

So now we can work out orbits. At least orbits for a central force problem. Those are ones where a particle — it’s easy to think of it as a planet — is pulled towards the center of the universe. How strong that pull is depends on some constants. But it only changes as the distance the planet is from the center changes.

What we’d like to know is whether there are circular orbits. By “we” I mean “mathematical physicists”. And I’m including you in that “we”. If you’re reading this far you’re at least interested in knowing how mathematical physicists think about stuff like this.

It’s easiest describing when these circular orbits exist if we start with the potential energy. That’s a function named ‘V’. We write it as ‘V(r)’ to show it’s an energy that changes as ‘r’ changes. By ‘r’ we mean the distance from the center of the universe. We’d use ‘d’ for that except we’re so used to thinking of distance from the center as ‘radius’. So ‘r’ seems more compelling. Sorry.

Besides the potential energy we need to know the angular momentum of the planet (or whatever it is) moving around the center. The amount of angular momentum is a number we call ‘L’. It might be positive, it might be negative. Also we need the planet’s mass, which we call ‘m’. The angular momentum and mass let us write a function called the effective potential energy, ‘Veff(r)’.

And we’ll need to take derivatives of ‘Veff(r)’. Fortunately that “How Differential Calculus Works” essay explains all the symbol-manipulation we need to get started. That part is calculus, but the easy part. We can just follow the rules already there. So here’s what we do:

• The planet (or whatever) can have a circular orbit around the center at any radius which makes the equation $\frac{dV_{eff}}{dr} = 0$ true.
• The circular orbit will be stable if the radius of its orbit makes the second derivative of the effective potential, $\frac{d^2V_{eff}}{dr^2}$, some number greater than zero.

We’re interested in stable orbits because usually unstable orbits are boring. They might exist but any little perturbation breaks them down. The mathematician, ordinarily, sees this as a useless solution except in how it describes different kinds of orbits. The physicist might point out that sometimes it can take a long time, possibly millions of years, before the perturbation becomes big enough to stand out. Indeed, it’s an open question whether our solar system is stable. While it seems to have gone millions of years without any planet changing its orbit very much we haven’t got the evidence to say it’s impossible that, say, Saturn will be kicked out of the solar system anytime soon. Or worse, that Earth might be. “Soon” here means geologically soon, like, in the next million years.

(If it takes so long for the instability to matter then the mathematician might allow that as “metastable”. There are a lot of interesting metastable systems. But right now, I don’t care.)

I realize now I didn’t explain the notation for the second derivative before. It looks funny because that’s just the best we can work out. In that fraction $\frac{d^2V_{eff}}{dr^2}$ the ‘d’ isn’t a number so we can’t cancel it out. And the superscript ‘2’ doesn’t mean squaring, at least not the way we square numbers. There’s a functional analysis essay in there somewhere. Again I’m sorry about this but there’s a lot of things mathematicians want to write out and sometimes we can’t find a way that avoids all confusion. Roll with it.

So that explains the whole thing clearly and easily and now nobody could be confused and yeah I know. If my Classical Mechanics professor left it at that we’d have open rebellion. Let’s do an example.

There are two and a half good examples. That is, they’re central force problems with answers we know. One is gravitation: we have a planet orbiting a star that’s at the origin. Another is springs: we have a mass that’s connected by a spring to the origin. And the half is electric: put a positive electric charge at the center and have a negative charge orbit that. The electric case is only half a problem because it’s the same as the gravitation problem except for what the constants involved are. Electric charges attract each other crazy way stronger than gravitational masses do. But that doesn’t change the work we do.

This is a lie. Electric charges accelerating, and just orbiting counts as accelerating, cause electromagnetic effects to happen. They give off light. That’s important, but it’s also complicated. I’m not going to deal with that.

I’m going to do the gravitation problem. After all, we know the answer! By Kepler’s something law, something something radius cubed something G M … something … squared … After all, we can look up the answer!

The potential energy for a planet orbiting a sun looks like this:

$V(r) = - G M m \frac{1}{r}$

Here ‘G’ is a constant, called the Gravitational Constant. It’s how strong gravity in the universe is. It’s not very strong. ‘M’ is the mass of the sun. ‘m’ is the mass of the planet. To make sense ‘M’ should be a lot bigger than ‘m’. ‘r’ is how far the planet is from the sun. And yes, that’s one-over-r, not one-over-r-squared. This is the potential energy of the planet being at a given distance from the sun. One-over-r-squared gives us how strong the force attracting the planet towards the sun is. Different thing. Related thing, but different thing. Just listing all these quantities one after the other means ‘multiply them together’, because mathematicians multiply things together a lot and get bored writing multiplication symbols all the time.
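The one-over-r versus one-over-r-squared distinction is just differentiation: the force is the negative derivative of the potential energy with respect to distance. A quick symbolic check, sketched here with SymPy and all the constants assumed positive:

```python
import sympy as sp

G, M, m, r = sp.symbols('G M m r', positive=True)

# Potential energy of the planet: one-over-r.
V = -G * M * m / r

# Force = -dV/dr: magnitude G M m / r^2, the negative sign meaning
# it points inward, toward the sun. One-over-r-squared, as promised.
force = -sp.diff(V, r)
print(force)
```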

Now for the effective potential we need to toss in the angular momentum. That’s ‘L’. The effective potential energy will be:

$V_{eff}(r) = - G M m \frac{1}{r} + \frac{L^2}{2 m r^2}$

I’m going to rewrite this in a way that means the same thing, but that makes it easier to take derivatives. At least easier to me. You’re on your own. But here’s what looks easier to me:

$V_{eff}(r) = - G M m r^{-1} + \frac{L^2}{2 m} r^{-2}$

I like this because it makes every term here look like “some constant number times r to a power”. That’s easy to take the derivative of. Check back on that “How Differential Calculus Works” essay. The first derivative of this ‘Veff(r)’, taken with respect to ‘r’, looks like this:

$\frac{dV_{eff}}{dr} = -(-1) G M m r^{-2} -2\frac{L^2}{2m} r^{-3}$

We can tidy that up a little bit: -(-1) is another way of writing 1. The second term has two times something divided by 2. We don’t need to be that complicated. In fact, when I worked out my notes I went directly to this simpler form, because I wasn’t going to be thrown by that. I imagine I’ve got people reading along here who are watching these equations warily, if at all. They’re ready to bolt at the first sign of something terrible-looking. There’s nothing terrible-looking coming up. All we’re doing from this point on is really arithmetic. It’s multiplying or adding or otherwise moving around numbers to make the equation prettier. It happens we only know those numbers by cryptic names like ‘G’ or ‘L’ or ‘M’. You can go ahead and pretend they’re ‘4’ or ‘5’ or ‘7’ if you like. You know how to do the steps coming up.

So! We allegedly can have a circular orbit when this first derivative is equal to zero. What values of ‘r’ make true this equation?

$G M m r^{-2} - \frac{L^2}{m} r^{-3} = 0$

Not so helpful there. What we want is to have something like ‘r = (mathematics stuff here)’. We have to do some high school algebra moving-stuff-around to get that. So one thing we can do to get closer is add the quantity $\frac{L^2}{m} r^{-3}$ to both sides of this equation. This gets us:

$G M m r^{-2} = \frac{L^2}{m} r^{-3}$

Things are getting better. Now multiply both sides by the same number. Which number? r^3. That’s because ‘r^-3’ times ‘r^3’ is going to equal 1, while ‘r^-2’ times ‘r^3’ will equal ‘r^1’, which normal people call ‘r’. I kid; normal people don’t think of such a thing at all, much less call it anything. But if they did, they’d call it ‘r’. We’ve got:

$G M m r = \frac{L^2}{m}$

And now we’re getting there! Divide both sides by whatever number ‘G M m’ is, as long as it isn’t zero. And then we have our circular orbit! It’s at the radius

$r = \frac{L^2}{G M m^2}$

Very good. I’d even say pretty. It’s got all those capital letters and one little lowercase. Something squared in the numerator and the denominator. Aesthetically pleasant. Stinks a little that it doesn’t look like anything we remember from Kepler’s Laws once we’ve looked them up. We can fix that, though.
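That radius is also exactly what a computer-algebra system finds if you hand it the circular-orbit condition from the bullet points above. A sketch with SymPy, all the constants assumed positive:

```python
import sympy as sp

G, M, m, L, r = sp.symbols('G M m L r', positive=True)

# The effective potential for the gravitational central-force problem.
V_eff = -G * M * m / r + L**2 / (2 * m * r**2)

# Circular orbits live where the first derivative with respect to r vanishes.
radii = sp.solve(sp.diff(V_eff, r), r)
print(radii)   # the lone solution is L^2 / (G M m^2)
```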

The key is the angular momentum ‘L’ there. I haven’t said anything about how that number relates to anything. It’s just been some constant of the universe. In a sense that’s fair enough. Angular momentum is conserved, exactly the same way energy is conserved, or the way linear momentum is conserved. Why not just let it be whatever number it happens to be?

(A note for people who skipped earlier essays: Angular momentum is not a number. It’s really a three-dimensional vector. But in a central force problem with just one planet moving around we aren’t doing any harm by pretending it’s just a number. We set it up so that the angular momentum is pointing directly out of, or directly into, the sheet of paper we pretend the planet’s orbiting in. Since we know the direction before we even start work, all we have to care about is the size. That’s the number I’m talking about.)

The angular momentum of a thing is its moment of inertia times its angular velocity. I’m glad to have cleared that up for you. The moment of inertia of a thing describes how easy it is to start it spinning, or stop it spinning, or change its spin. It’s a lot like inertia. What it is depends on the mass of the thing spinning, and how that mass is distributed, and what it’s spinning around. It’s the first part of physics that makes the student really have to know volume integrals.

We don’t have to know volume integrals. A single point mass spinning at a constant speed at a constant distance from the origin is the easy angular momentum to figure out. A mass ‘m’ at a fixed distance ‘r’ from the center of rotation moving at constant speed ‘v’ has an angular momentum of ‘m’ times ‘r’ times ‘v’.

So great; we’ve turned ‘L’ which we didn’t know into ‘m r v’, where we know ‘m’ and ‘r’ but don’t know ‘v’. We’re making progress, I promise. The planet’s tracing out a circle in some amount of time. It’s a circle with radius ‘r’. So it traces out a circle with perimeter ‘2 π r’. And it takes some amount of time to do that. Call that time ‘T’. So its speed will be the distance travelled divided by the time it takes to travel. That’s $\frac{2 \pi r}{T}$. Again we’ve changed one unknown number ‘L’ for another unknown number ‘T’. But at least ‘T’ is an easy familiar thing: it’s how long the orbit takes.

Let me show you how this helps. Start off with what ‘L’ is:

$L = m r v = m r \frac{2\pi r}{T} = 2\pi m \frac{r^2}{T}$

Now let’s put that into the equation I got eight paragraphs ago:

$r = \frac{L^2}{G M m^2}$

Remember that one? Now put what I just said ‘L’ was, in where ‘L’ shows up in that equation.

$r = \frac{\left(2\pi m \frac{r^2}{T}\right)^2}{G M m^2}$

I agree, this looks like a mess and possibly a disaster. It’s not so bad. Do some cleaning up on that numerator.

$r = \frac{4 \pi^2 m^2}{G M m^2} \frac{r^4}{T^2}$

That’s looking a lot better, isn’t it? We even have something we can divide out: the mass of the planet is just about to disappear. This sounds bizarre, but remember Kepler’s laws: the mass of the planet never figures into things. We may be on the right path yet.

$r = \frac{4 \pi^2}{G M} \frac{r^4}{T^2}$

OK. Now I’m going to multiply both sides by ‘T^2’ because that’ll get that out of the denominator. And I’ll divide both sides by ‘r’ so that I only have the radius of the circular orbit on one side of the equation. Here’s what we’ve got now:

$T^2 = \frac{4 \pi^2}{G M} r^3$

And hey! That looks really familiar. A circular orbit’s radius cubed is some multiple of the square of the orbit’s time. Yes. This looks right. At least it looks reasonable. Someone else can check if it’s right. I like the look of it.
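As a sanity check that this really is Kepler’s Third Law, feed in rough Earth-Sun numbers and see whether a year comes out. (The values below are standard approximations, not from the essay.)

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30       # mass of the Sun, kg
r = 1.496e11       # Earth's orbital radius, m (about 1 AU)

# T^2 = (4 pi^2 / G M) r^3, so:
T = math.sqrt(4 * math.pi**2 * r**3 / (G * M))

print(T / (60 * 60 * 24))   # about 365 days
```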

So this is the process you’d use to start understanding orbits for your own arbitrary potential energy. You can find the equivalent of Kepler’s Third Law, the one connecting orbit times and orbit radiuses. And it isn’t really hard. You need to know enough calculus to differentiate one function, and then you need to be willing to do a pile of arithmetic on letters. It’s not actually hard. Next time I hope to talk about more and different … um …

I’d like to talk about the different … oh, dear. Yes. You’re going to ask about that, aren’t you?

Ugh. All right. I’ll do it.

How do we know this is a stable orbit? Well, it just is. If it weren’t the Earth wouldn’t have a Moon after all this. Heck, the Sun wouldn’t have an Earth. At least it wouldn’t have a Jupiter. If the solar system is unstable, Jupiter is probably the most stable part. But that isn’t convincing. I’ll do this right, though, and show what the second derivative tells us. It tells us this is too a stable orbit.

So. The thing we have to do is find the second derivative of the effective potential. This we do by taking the derivative of the first derivative. Then we have to evaluate this second derivative and see what value it has for the radius of our circular orbit. If that’s a positive number, then the orbit’s stable. If that’s a negative number, then the orbit’s not stable. This isn’t hard to do, but it isn’t going to look pretty.

First the pretty part, though. Here’s the first derivative of the effective potential:

$\frac{dV_{eff}}{dr} = G M m r^{-2} - \frac{L^2}{m} r^{-3}$

OK. So the derivative of this with respect to ‘r’ isn’t hard to evaluate again. This is again a function with a bunch of terms that are all a constant times r to a power. That’s the easiest sort of thing to differentiate that isn’t just something that never changes.

$\frac{d^2 V_{eff}}{dr^2} = -2 G M m r^{-3} - (-3)\frac{L^2}{m} r^{-4}$

Now the messy part. We need to work out what that line above is when our planet’s in our circular orbit. That circular orbit happens when $r = \frac{L^2}{G M m^2}$. So we have to substitute that mess in for ‘r’ wherever it appears in that above equation and you’re going to love this. Are you ready? It’s:

$-2 G M m \left(\frac{L^2}{G M m^2}\right)^{-3} + 3\frac{L^2}{m}\left(\frac{L^2}{G M m^2}\right)^{-4}$

This will get a bit easier promptly. That’s because something raised to a negative power is the same as its reciprocal raised to the positive of that power. So that terrible, terrible expression is the same as this terrible, terrible expression:

$-2 G M m \left(\frac{G M m^2}{L^2}\right)^3 + 3 \frac{L^2}{m}\left(\frac{G M m^2}{L^2}\right)^4$

Yes, yes, I know. Only thing to do is start hacking through all this because I promise it’s going to get better. Putting all those third- and fourth-powers into their parentheses turns this mess into:

$-2 G M m \frac{G^3 M^3 m^6}{L^6} + 3 \frac{L^2}{m} \frac{G^4 M^4 m^8}{L^8}$

Yes, my gut reaction when I see multiple things raised to the eighth power is to say I don’t want any part of this either. Hold on another line, though. Things are going to start cancelling out and getting shorter. Group all those things-to-powers together:

$-2 \frac{G^4 M^4 m^7}{L^6} + 3 \frac{G^4 M^4 m^7}{L^6}$

Oh. Well, now this is different. The second derivative of the effective potential, at this point, is the number

$\frac{G^4 M^4 m^7}{L^6}$

And I admit I don’t know what number that is. But here’s what I do know: ‘G’ is a positive number. ‘M’ is a positive number. ‘m’ is a positive number. ‘L’ might be positive or might be negative, but ‘L^6’ is a positive number either way. So this is a bunch of positive numbers multiplied and divided together.

So this second derivative, whatever it is, must be a positive number. And so this circular orbit is stable. Give the planet a little nudge and that’s all right. It’ll stay near its orbit. I’m sorry to put you through that but some people raised the, honestly, fair question.
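A numeric spot-check is reassuring too. With made-up values G = M = m = 1 and L = 2, the circular orbit sits at r = 4, and a finite-difference second derivative there should come out positive, matching G^4 M^4 m^7 / L^6 = 1/64:

```python
G = M = m = 1.0   # made-up values, chosen so the numbers are tidy
L = 2.0

def V_eff(r):
    return -G * M * m / r + L**2 / (2 * m * r**2)

r_circ = L**2 / (G * M * m**2)   # the circular-orbit radius found earlier

# Central-difference approximation to the second derivative at r_circ.
h = 1e-4
d2 = (V_eff(r_circ + h) - 2 * V_eff(r_circ) + V_eff(r_circ - h)) / h**2

print(r_circ)   # 4.0
print(d2)       # about 0.015625, which is 1/64: positive, so stable
```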

So this is the process you’d use to start understanding orbits for your own arbitrary potential energy. You can find the equivalent of Kepler’s Third Law, the one connecting orbit times and orbit radiuses. And it isn’t really hard. You need to know enough calculus to differentiate one function, and then you need to be willing to do a pile of arithmetic on letters. It’s not actually hard. Next time I hope to talk about the other kinds of central forces that you might get. We only solved one problem here. We can solve way more than that.

• howardat58 6:18 pm on Friday, 21 October, 2016 Permalink | Reply

I love the chatty approach.

Like

• Joseph Nebus 5:03 am on Saturday, 22 October, 2016 Permalink | Reply

Thank you. I realized doing Theorem Thursdays over the summer that it was hard to avoid that voice, and then that it was fun writing in it. So eventually I do learn, sometimes.

Like

Why Stuff Can Orbit, Part 5: Why Physics Doesn’t Work And What To Do About It

Less way previously:

My title’s hyperbole, to the extent it isn’t clickbait. Of course physics works. By “work” I mean “model the physical world in useful ways”. If it didn’t work then we would call it “pure” mathematics instead. Mathematicians would study it for its beauty. Physicists would be left to fend for themselves. “Useful” I’ll say means “gives us something interesting to know”. “Interesting” I’ll say if you want to ask what that means then I think you’re stalling.

But what I mean is that Newtonian physics, the physics learned in high school, doesn’t work. Well, it works, in that if you set up a problem right and calculate right you get answers that are right. It’s just not efficient, for a lot of interesting problems. Don’t ask me about interesting again. I’ll just say the central-force problems from this series are interesting.

Newtonian, high school type, physics works fine. It shines when you have only a few things to keep track of. In this central force problem we have one object, a planet-or-something, that moves. And only one force, one that attracts the planet to or repels the planet from the center, the Origin. This is where we’d put the sun, in a planet-and-sun system. So that seems all right as far as things go.

It’s less good, though, if there are constraints. If it’s not possible for the particle to move in any old direction, say. That doesn’t turn up here; we can imagine a planet heading in any direction relative to the sun. But it’s also less good if there’s a symmetry in what we’re studying. And in this case there is. The strength of the central force only changes based on how far the planet is from the origin. The direction only changes based on what direction the planet is relative to the origin. It’s a bit daft to bother with x’s and y’s and maybe even z’s when all we care about is the distance from the origin. That’s a number we’ve called ‘r’.

So this brings us to Lagrangian mechanics. This was developed in the 18th century by Joseph-Louis Lagrange. He’s another of those 18th century mathematicians-and-physicists with his name all over everything. Lagrangian mechanics are really, really good when there are a couple of variables that describe both what we’d like to observe about the system and its energy. That’s exactly what we have with central forces. Give me a central force, one that’s pointing directly toward or away from the origin, and that grows or shrinks as the radius changes. I can give you a potential energy function, V(r), that matches that force. Give me an angular momentum L for the planet to have, and I can give you an effective potential energy function, Veff(r). And that effective potential energy lets us describe how the coordinates change in time.
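Earlier parts of this series built this effective potential up; as a reminder, it’s the plain potential energy plus a term that accounts for the angular momentum, with m the planet’s mass:

```latex
V_{\mathrm{eff}}(r) = V(r) + \frac{L^2}{2 m r^2}
```

The second term only gets big when r gets small, which is what keeps a planet with any angular momentum at all from falling straight into the center.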

The method looks roundabout. It depends on two things. One is the coordinate you’re interested in, in this case, r. The other is how fast that coordinate changes in time. This we have a couple of ways of denoting. When working stuff out on paper that’s often done by putting a little dot above the letter. If you’re typing, dots-above-the-symbol are hard. So we mark it as a prime instead: r’. This works well until the web browser or the word processor assumes we want smart quotes and we already had the r’ in quote marks. At that point all hope of meaning is lost and we return to communicating by beating rocks with sticks. We live in an imperfect world.

What we get out of this is a setup that tells us how fast r’, how fast the coordinate we’re interested in changes in time, itself changes in time. If the coordinate we’re interested in is the ordinary old position of something, then this describes the rate of change of the velocity. In ordinary English we call that the acceleration. What makes this worthwhile is that the coordinate doesn’t have to be the position. It also doesn’t have to be all the information we need to describe the position. For the central force problem r here is just how far the planet is from the center. That tells us something about its position, but not everything. We don’t care about anything except how far the planet is from the center, not yet. So it’s fine we have a setup that doesn’t tell us about the stuff we don’t care about.

How fast r’ changes in time will be proportional to how fast the effective potential energy, Veff(r), changes with its coordinate. I so want to write “changes with position”, since these coordinates are usually the position. But they can be proxies for the position, or things only loosely related to the position. For an example that isn’t a central force, think about a spinning top. It spins, it wobbles, it might even dance across the table because don’t they all do that? The coordinates that most sensibly describe how it moves are about its rotation, though. What axes is it rotating around? How do those change in time? Those don’t have anything particular to do with where the top is. That’s all right. The mathematics works just fine.
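Written out as an equation, with r’’ standing for the rate of change of r’, that proportionality is (this is the standard form; the minus sign is what makes the planet roll “downhill” in the effective potential):

```latex
m\, r'' = -\frac{d V_{\mathrm{eff}}}{d r}
```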

A circular orbit is one where the radius doesn’t change in time. (I’ll look at non-circular orbits later on.) That is, the radius is not increasing and is not decreasing. If it isn’t getting bigger and it isn’t getting smaller, then it’s got to be staying the same. Not all higher mathematics is tricky. The radius of the orbit is the thing I’ve been calling r all this time. So this means that r’, how fast r is changing with time, has to be zero. Now a slightly tricky part.

How fast is r’, the rate at which r changes, changing? Well, r’ never changes. It’s always the same value. Anytime something is always the same value the rate of its change is zero. This sounds tricky. The tricky part is that it isn’t tricky. It’s coincidental that r’ is zero and the rate of change of r’ is zero, though. If r’ were any fixed, never-changing number, then the rate of change of r’ would be zero. It happens that we’re interested in times when r’ is zero.

So we’ll find circular orbits where the change in the effective potential energy, as r changes, is zero. There’s an easy-to-understand intuitive idea of where to find these points. Look at a plot of Veff and imagine this is a smooth track or the cross-section of a bowl or the landscaping of a hill. Imagine dropping a ball or a marble or a bearing or something small enough to roll in it. Where does it roll to a stop? That’s where the change is zero.
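The rolling-marble picture is easy to check numerically. A minimal sketch, with made-up numbers (k = 1, m = 1, L = 2 are for illustration only, and the gravity-like potential V(r) = -k/r is one example, not the only choice): hunt for where the derivative of Veff crosses zero, by bisection.

```python
def veff_prime(r, k=1.0, m=1.0, L=2.0):
    # Derivative of Veff(r) = -k/r + L^2/(2 m r^2),
    # which is k/r^2 - L^2/(m r^3).
    return k / r**2 - L**2 / (m * r**3)

def find_equilibrium(f, lo, hi, tol=1e-12):
    """Bisection: find r in [lo, hi] where f changes sign."""
    flo = f(lo)
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) == 0 or hi - lo < tol:
            return mid
        if (f(mid) > 0) == (flo > 0):
            lo, flo = mid, f(mid)   # same sign as lo: move lo up
        else:
            hi = mid                # opposite sign: move hi down
    return 0.5 * (lo + hi)

r0 = find_equilibrium(veff_prime, 0.5, 10.0)
print(r0)  # analytic answer for these numbers is L^2/(m k) = 4
```

The derivative is negative at small r (the marble rolls outward) and positive at large r (it rolls back inward), so the sign change is exactly the bottom of the bowl.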

It’s too much bother to make a bowl or landscape a hill or whatnot for every problem we’re interested in. We might do it anyway. Mathematicians used to, to study problems that were too complicated to do by useful estimates. These were “analog computers”. They were big in the days before digital computers made it no big deal to simulate even complicated systems. We still need “analog computers” or models sometimes. That’s usually for problems that involve chaotic stuff like turbulent fluids. We call this stuff “wind tunnels” and the like. It’s all a matter of solving equations by building stuff.

We’re not working with problems that complicated. There isn’t the sort of chaos lurking in this problem that drives us to real-world stuff. We can find these equilibriums by working just with symbols instead.
