Why Stuff Can Orbit, Part 5: Why Physics Doesn’t Work And What To Do About It

My title’s hyperbole, to the extent it isn’t clickbait. Of course physics works. By “work” I mean “model the physical world in useful ways”. If it didn’t work then we would call it “pure” mathematics instead. Mathematicians would study it for its beauty. Physicists would be left to fend for themselves. “Useful” I’ll say means “gives us something interesting to know”. And if you want to ask what “interesting” means, then I think you’re stalling.

But what I mean is that Newtonian physics, the physics learned in high school, doesn’t work. Well, it works, in that if you set up a problem right and calculate right you get answers that are right. It’s just not efficient, for a lot of interesting problems. Don’t ask me about interesting again. I’ll just say the central-force problems from this series are interesting.

Newtonian, high school type, physics works fine. It shines when you have only a few things to keep track of. In this central force problem we have one object, a planet-or-something, that moves. And only one force, one that attracts the planet to or repels the planet from the center, the Origin. This is where we’d put the sun, in a planet-and-sun system. So that seems all right as far as things go.

It’s less good, though, if there’s constraints. If it’s not possible for the particle to move in any old direction, say. That doesn’t turn up here; we can imagine a planet heading in any direction relative to the sun. But it’s also less good if there’s a symmetry in what we’re studying. And in this case there is. The strength of the central force only changes based on how far the planet is from the origin. The direction only changes based on what direction the planet is relative to the origin. It’s a bit daft to bother with x’s and y’s and maybe even z’s when all we care about is the distance from the origin. That’s a number we’ve called ‘r’.

So this brings us to Lagrangian mechanics. This was developed in the 18th century by Joseph-Louis Lagrange. He’s another of those 18th century mathematicians-and-physicists with his name all over everything. Lagrangian mechanics are really, really good when there’s a couple variables that describe both what we’d like to observe about the system and its energy. That’s exactly what we have with central forces. Give me a central force, one that’s pointing directly toward or away from the origin, and that grows or shrinks as the radius changes. I can give you a potential energy function, V(r), that matches that force. Give me an angular momentum L for the planet to have, and I can give you an effective potential energy function, Veff(r). And that effective potential energy lets us describe how the coordinates change in time.
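For reference, that effective potential adds a centrifugal term to the original potential energy; for a planet with mass m and angular momentum L it takes the standard form

$V_{eff}(r) = V(r) + \frac{L^2}{2 m r^2}$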

The method looks roundabout. It depends on two things. One is the coordinate you’re interested in, in this case, r. The other is how fast that coordinate changes in time. This we have a couple of ways of denoting. When working stuff out on paper that’s often done by putting a little dot above the letter. If you’re typing, dots-above-the-symbol are hard. So we mark it as a prime instead: r’. This works well until the web browser or the word processor assumes we want smart quotes and we already had the r’ in quote marks. At that point all hope of meaning is lost and we return to communicating by beating rocks with sticks. We live in an imperfect world.

What we get out of this is a setup that tells us how fast r’, how fast the coordinate we’re interested in changes in time, itself changes in time. If the coordinate we’re interested in is the ordinary old position of something, then this describes the rate of change of the velocity. In ordinary English we call that the acceleration. What makes this worthwhile is that the coordinate doesn’t have to be the position. It also doesn’t have to be all the information we need to describe the position. For the central force problem r here is just how far the planet is from the center. That tells us something about its position, but not everything. We don’t care about anything except how far the planet is from the center, not yet. So it’s fine we have a setup that doesn’t tell us about the stuff we don’t care about.

How fast r’ changes in time will be proportional to how fast the effective potential energy, Veff(r), changes with its coordinate. I so want to write “changes with position”, since these coordinates are usually the position. But they can be proxies for the position, or things only loosely related to the position. For an example that isn’t a central force, think about a spinning top. It spins, it wobbles, it might even dance across the table because don’t they all do that? The coordinates that most sensibly describe how it moves are about its rotation, though. What axes is it rotating around? How do those change in time? Those don’t have anything particular to do with where the top is. That’s all right. The mathematics works just fine.

A circular orbit is one where the radius doesn’t change in time. (I’ll look at non-circular orbits later on.) That is, the radius is not increasing and is not decreasing. If it isn’t getting bigger and it isn’t getting smaller, then it’s got to be staying the same. Not all higher mathematics is tricky. The radius of the orbit is the thing I’ve been calling r all this time. So this means that r’, how fast r is changing with time, has to be zero. Now a slightly tricky part.

How fast is r’, the rate at which r changes, changing? Well, r’ never changes. It’s always the same value. Anytime something is always the same value the rate of its change is zero. This sounds tricky. The tricky part is that it isn’t tricky. It’s coincidental that r’ is zero and the rate of change of r’ is zero, though. If r’ were any fixed, never-changing number, then the rate of change of r’ would be zero. It happens that we’re interested in times when r’ is zero.

So we’ll find circular orbits where the change in the effective potential energy, as r changes, is zero. There’s an easy-to-understand intuitive idea of where to find these points. Look at a plot of Veff and imagine this is a smooth track or the cross-section of a bowl or the landscaping of a hill. Imagine dropping a ball or a marble or a bearing or something small enough to roll in it. Where does it roll to a stop? That’s where the change is zero.
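That marble-drop picture can be carried out literally in a few lines of code. A minimal sketch of my own, assuming a Kepler-like potential $V(r) = -\frac{1}{r}$ with mass and angular momentum both set to 1, so that $V_{eff}(r) = -\frac{1}{r} + \frac{1}{2r^2}$ and the marble ought to settle at r = 1:

```python
# "Dropping a marble" on the effective-potential curve, numerically: follow
# the downhill slope until it flattens out.  Assumes a Kepler-like
# V(r) = -1/r with mass and angular momentum both 1.

def v_eff(r):
    return -1.0 / r + 1.0 / (2.0 * r ** 2)

def slope(f, r, h=1e-6):
    # Centered finite-difference estimate of df/dr.
    return (f(r + h) - f(r - h)) / (2.0 * h)

def roll_to_rest(f, r=3.0, step=0.01, tol=1e-8):
    # Crude gradient descent: step against the slope until it vanishes.
    for _ in range(1_000_000):
        s = slope(f, r)
        if abs(s) < tol:
            break
        r -= step * s
    return r

r_circ = roll_to_rest(v_eff)
print(r_circ)  # settles near 1, the circular-orbit radius for this setup
```

Nothing about the gradient-descent loop is special; it is just the marble rolling downhill in slow motion.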

It’s too much bother to make a bowl or landscape a hill or whatnot for every problem we’re interested in. We might do it anyway. Mathematicians used to, to study problems too complicated to solve symbolically or even estimate usefully. These were “analog computers”. They were big in the days before digital computers made it no big deal to simulate even complicated systems. We still need “analog computers”, or models, sometimes. That’s usually for problems that involve chaotic stuff like turbulent fluids. We call this stuff “wind tunnels” and the like. It’s all a matter of solving equations by building stuff.

We’re not working with problems that complicated. There isn’t the sort of chaos lurking in this problem that drives us to real-world stuff. We can find these equilibriums by working just with symbols instead.

Reading the Comics, September 24, 2016: Infinities Happen Edition

I admit it’s a weak theme. But two of the comics this week give me reason to talk about infinitely large things and how the fact of being infinitely large affects the probability of something happening. That’s enough for a mid-September week of comics.

Kieran Meehan’s Pros and Cons for the 18th of September is a lottery problem. There’s a fun bit of mathematical philosophy behind it. Supposing that a lottery runs long enough without changing its rules, and that it does draw its numbers randomly, it does seem to follow that any valid set of numbers will come up eventually. At least, the probability is 1 that the pre-selected set of numbers will come up if the lottery runs long enough. But that doesn’t mean it’s assured. There’s not any law, physical or logical, compelling every set of numbers to come up. But that is exactly akin to tossing a coin fairly infinitely many times and having it come up tails every single time. There’s no reason that can’t happen, but it can’t happen.

Kieran Meehan’s Pros and Cons for the 18th of September, 2016. I can’t say whether any of these are supposed to be the PowerBall number. (The comic strip’s title is a revision of its original, which more precisely described its gimmick but was harder to remember: A Lawyer, A Doctor, and a Cop.)

Leigh Rubin’s Rubes for the 19th name-drops chaos theory. It’s wordplay, as of course it is, since the mathematical chaos isn’t the confusion-and-panicky-disorder of the colloquial term. Mathematical chaos is about the bizarre idea that a system can follow exactly perfectly known rules, and yet still be impossible to predict. Henri Poincaré brought this disturbing possibility to mathematicians’ attention in the 1890s, in studying the question of whether the solar system is stable. But it lay mostly fallow until the 1960s when computers made it easy to work this out numerically and really see chaos unfold. The mathematician type in the drawing evokes Einstein without being too close to him, to my eye.

Allison Barrows’s PreTeena rerun of the 20th shows some motivated calculations. It’s always fun to see people getting excited over what a little multiplication can do. Multiplying a little change by a lot of chances is one of the ways to understand integral calculus, and there’s much that’s thrilling in that. But cutting four hours a night of sleep is not a little thing and I wouldn’t advise it for anyone.

Jason Poland’s Robbie and Bobby for the 20th riffs on Jorge Luis Borges’s Library of Babel. It’s a great image, the idea of the library containing every book possible. And it’s good mathematics also; it’s a good way to probe one’s understanding of infinity and of probability. Probably logic, also. After all, grant that the index to the Library of Babel is a book, and therefore in the library somehow. How do you know you’ve found the index that hasn’t got any errors in it?

Ernie Bushmiller’s Nancy Classics for the 21st originally ran the 21st of September, 1949. It’s another example of arithmetic as a proof of intelligence. Routine example, although it’s crafted with the usual Bushmiller precision. Even the close-up, peering-into-your-soul image of Professor Stroodle in the second panel serves the joke; without it the stress on his wrinkled brow would be diffused. I can’t fault anyone not caring for the joke; it’s not much of one. But wow is the comic strip optimized to deliver it.

Thom Bluemel’s Birdbrains for the 23rd is also a mathematics-as-proof-of-intelligence strip, although this one name-drops calculus. It’s also a strip that probably would have played better had it come out before Blackfish got people asking unhappy questions about Sea World and other aquariums keeping large, deep-ocean animals. I would’ve thought Comic Strip Master Command would have sent an advisory out on the topic.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 23rd is, among other things, a guide for explaining the difference between speed and velocity. Speed’s a simple number, a scalar in the parlance. Velocity is (most often) a two- or three-dimensional vector, a speed in some particular direction. This has implications for understanding how things move, such as pedestrians.
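In code the distinction is a one-liner; a tiny sketch of my own:

```python
import math

# Velocity is a vector (direction matters); speed is its magnitude, a scalar.
velocity = (3.0, 4.0)           # east and north components, say in m/s
speed = math.hypot(*velocity)   # the Euclidean norm: sqrt(3^2 + 4^2)

print(speed)  # 5.0, the classic 3-4-5 right triangle
```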

• Joseph Nebus 6:00 pm on Saturday, 24 September, 2016. Tags: handedness, statistical mechanics, thermodynamics

I have been writing, albeit more slowly, this month. I’m also reading, also more slowly than usual. Here’s some things that caught my attention.

One is from Elke Stangl, of the Elkemental blog. “Re-Visiting Carnot’s Theorem” is about one of the centerpieces of thermodynamics. It’s about how much work you can possibly get out of an engine, and how much must be lost no matter how good your engineering is. Thermodynamics is the secret spine of modern physics. It was born of supremely practical problems, many of them related to railroads or factories. And it teaches how much solid information can be drawn about a system if we know nothing about the components of the system. Stangl also brings ASCII art back from its Usenet and Twitter homes. There’s just stuff that is best done as a text picture.

Meanwhile on the CarnotCycle blog Peter Mandel writes on “Le Châtelier’s principle”. This is related to the question of how temperatures affect chemical reactions: how fast they will be, how completely they’ll use the reagents. How a system that’s reached equilibrium will react to something that unsettles the equilibrium. We call that a perturbation. Mandel reviews the history of the principle, which hasn’t always been well-regarded, and explores why it might have gone under-appreciated for decades.

And lastly MathsByAGirl has published a couple of essays on spirals. Who doesn’t like them? Three-dimensional spirals, that is, helixes, have some obvious things to talk about. A big one is that there’s such a thing as handedness. The mirror image of a coil is not the same thing as the coil flipped around. This handedness has analogues and implications through chemistry and biology. Two-dimensional spirals, by contrast, don’t have handedness like that. But we’ve grouped spirals into many different types, each with its own beauty. They’re worth looking at.

• elkement (Elke Stangl) 7:49 pm on Saturday, 24 September, 2016

Thanks a lot for the link – I am honored! :-)


• Joseph Nebus 5:58 pm on Monday, 26 September, 2016

Aw, well, gosh. I’m not sure how much an honor me linking a page is, but I’m glad to do it.


L’Hopital’s Rule Without End: Is That A Thing?

I was helping a friend learn L’Hôpital’s Rule. This is a Freshman Calculus thing. (A different one from last week, it happens. Folks are going back to school, I suppose.) The friend asked me a point I thought shouldn’t come up. I’m certain it won’t come up in the exam my friend was worried about, but I couldn’t swear it wouldn’t happen at all. So this is mostly a note to myself to think it over and figure out whether the trouble could come up. And also so this won’t be my most accessible post; I’m sorry for that, for folks who aren’t calculus-familiar.

L’Hôpital’s Rule is a way of evaluating the limit of one function divided by another, of f(x) divided by g(x). If the limit of $\frac{f(x)}{g(x)}$ has either the form of $\frac{0}{0}$ or $\frac{\infty}{\infty}$ then you’re not stuck. You can take the first derivative of the numerator and the denominator separately. The limit of $\frac{f'(x)}{g'(x)}$ if it exists will be the same value.

But it’s possible to have to do this several times over. I used the example of finding the limit, as x grows infinitely large, where $f(x) = x^2$ and $g(x) = e^x$. $\frac{x^2}{e^x}$ goes to $\frac{\infty}{\infty}$ as x grows infinitely large. The first derivatives, $\frac{2x}{e^x}$, also go to $\frac{\infty}{\infty}$. You have to repeat the process again, taking the first derivatives of numerator and denominator again. $\frac{2}{e^x}$ finally goes to 0 as x gets infinitely large. You might have to do this a bunch of times. If f(x) were $x^7$ and g(x) again $e^x$ you’d properly need to do this seven times over. With experience you figure out you can skip some steps. Of course students don’t have the experience to know they can skip ahead to the punch line there, but that’s what the practice in homework is for.
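Those repeated derivatives can be spelled out and checked numerically; this sketch (my addition, with the derivatives written out by hand) evaluates every stage of the chain at a large x:

```python
import math

# The L'Hopital chain for x^2 / e^x, each round of derivatives written
# out by hand as a (numerator, denominator) pair.
stages = [
    (lambda x: x ** 2, math.exp),  # original form: infinity over infinity
    (lambda x: 2 * x,  math.exp),  # one derivative each: still inf over inf
    (lambda x: 2.0,    math.exp),  # two derivatives: a constant over e^x
]

x = 50.0
ratios = [num(x) / den(x) for num, den in stages]
# Every stage's ratio is already tiny at x = 50; the limit is 0.
```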

Anyway, my friend asked whether it’s possible to get a pattern that always ends up with $\frac{0}{0}$ or $\frac{\infty}{\infty}$ and never breaks out of this. And that’s what’s got me stuck. I can think of a few patterns that would. Start out, for example, with f(x) = e3x and g(x) = e2x. Properly speaking, that would never end. You’d get an infinity-over-infinity pattern every derivative you took. Similarly, if you started with $f(x) = \frac{1}{x}$ and $g(x) = e^{-x}$ you’d never come to an end. As x got infinitely large both f(x) and g(x) would go to zero, and so would all their derivatives, over and over and over and over again.

But those are special cases. Anyone looking at what they were doing instead of just calculating would look at, say, $\frac{e^{3x}}{e^{2x}}$ and realize that’s the same as $e^x$ which falls out of the L’Hôpital’s Rule formulas. Or $\frac{\frac{1}{x}}{e^{-x}}$ would be the same as $\frac{e^x}{x}$ which is an infinity-over-infinity form. But it takes only one derivative to break out of the infinity-over-infinity pattern.
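A quick numerical sanity check of that first pair (my addition, not part of the original argument):

```python
import math

# The "endless" pair e^{3x} / e^{2x}: every round of derivatives leaves
# another infinity-over-infinity form, but the quotient was e^x all along.
ratios_match = all(
    math.isclose(math.exp(3 * x) / math.exp(2 * x), math.exp(x))
    for x in (0.0, 1.0, 2.5, 7.0)
)
print(ratios_match)  # True
```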

So I can construct examples that never break out of a zero-over-zero or an infinity-over-infinity pattern if you calculate without thinking. And calculating without thinking is a common problem students have. Arguably it’s the biggest problem mathematics students have. But what I wonder is, are there ratios that end up in an endless zero-over-zero or infinity-over-infinity pattern even if you do think it out?

And thus this note; I’d like to nag myself into thinking about that.

• John Quintanilla 8:37 pm on Thursday, 22 September, 2016

How about $\lim_{x \to 0^+} \displaystyle \frac{x}{e^{-1/x}}$? Applying L’Hopital’s Rule once gives $\displaystyle \frac{x^2}{e^{-1/x}}$, then $\displaystyle \frac{2x^3}{e^{-1/x}}$, etc.


• Joseph Nebus 4:04 am on Friday, 23 September, 2016

I think you’ve got it, yes! Great eye.


• Joseph Nebus 6:00 pm on Sunday, 18 September, 2016. Tags: animation, arithmetic, calculus, notation, pizza, symbols, topology

As though to reinforce how nothing was basically wrong, Comic Strip Master Command sent a normal number of mathematically themed comics around this past week. They bunched the strips up in the first half of the week, but that will happen. It was a fun set of strips in any event.

Rob Harrell’s Adam @ Home for the 11th tells of a teacher explaining division through violent means. I’m all for visualization tools and if we are going to use them, the more dramatic the better. But I suspect Mrs Clark’s students will end up confused about what exactly they’ve learned. If a doll is torn into five parts, is that communicating that one divided by five is five? If the students were supposed to identify the mass of the parts of the torn-up dolls as the result of dividing one by five, was that made clear to them? Maybe it was. But there’s always the risk in a dramatic presentation that the audience will misunderstand the point. The showier the drama the greater the risk, it seems to me. But I did only get the demonstration secondhand; who knows how well it was done?

Greg Cravens’ The Buckets for the 11th has the kid, Toby, struggling to turn a shirt backwards and inside-out without taking it off. As the commenters note this is the sort of problem we get into all the time in topology. The field is about what we can say about shapes when we don’t worry about distance. If all we know about a shape is the ways it’s connected, the number of holes it has, whether we can distinguish one side from another, what else can we conclude? I believe Gocomics.com commenter Mike is right: take one hand out the bottom of the shirt and slide it into the other sleeve from the outside end, and proceed from there. But I have not tried it myself. I haven’t yet started wearing long-sleeve shirts for the season.

Bill Amend’s FoxTrot for the 11th — a new strip — does a story problem featuring pizzas cut into some improbable numbers of slices. I don’t say it’s unrealistic someone might get this homework problem. Just that the story writer should really ask whether they’ve ever seen a pizza cut into sevenths. I have a faint memory of being served a pizza cut into tenths by some daft pizza shop, which implies fifths is at least possible. Sevenths I refuse, though.

Mark Tatulli’s Heart of the City for the 12th plays on the show-your-work directive many mathematics assignments carry. I like Heart’s showiness. But the point of showing your work is because nobody cares what (say) 224 divided by 14 is. What’s worth teaching is the ability to recognize what approaches are likely to solve what problems. What’s tested is whether someone can identify a way to solve the problem that’s likely to succeed, and whether that can be carried out successfully. This is why it’s always a good idea, if you are stumped on a problem, to write out how you think this problem should be solved. Writing out what you mean to do can clarify the steps you should take. And it can guide your instructor to whether you’re misunderstanding something fundamental, or whether you just missed something small, or whether you just had a bad day.

Norm Feuti’s Gil for the 12th, another rerun, has another fanciful depiction of showing your work. The teacher’s got a fair complaint in the note. We moved away from tally marks as a way to denote numbers for reasons. Twelve depictions of apples are harder to read than the number 12. And they’re terrible if we need to depict numbers like one-half or one-third. Might be an interesting side lesson in that.

Brian Basset’s Red and Rover for the 14th is a rerun and one I’ve mentioned in these parts before. I understand Red getting fired up to be an animator by the movie. It’s been a while since I watched Donald Duck in Mathmagic Land but my recollection is that while it was breathtaking and visually inventive it didn’t really get at mathematics. I mean, not at noticing interesting little oddities and working out whether they might be true always, or sometimes, or almost never. There is a lot of play in mathematics, especially in the exciting early stages where one looks for a thing to prove. But it’s also in seeing how an ingenious method lets you get just what you wanted to know. I don’t know that the short demonstrates enough of that.

Bud Blake’s Tiger rerun for the 15th of September, 2016. I don’t get to talking about the art of the comics here, but, I quite like Julian’s expressions here. And Bud Blake drew fantastic rumpled clothes.

Bud Blake’s Tiger rerun for the 15th gives Punkinhead the chance to ask a question. And it’s a great question. I’m not sure what I’d say arithmetic is, not if I’m going to be careful. Offhand I’d say arithmetic is a set of rules we apply to a set of things we call numbers. The rules are mostly about how we can take two numbers and a rule and replace them with a single number. And these turn out to correspond uncannily well with the sorts of things we do with counting, combining, separating, and doing some other stuff with real-world objects. That it’s so useful is why, I believe, arithmetic and geometry were the first mathematics humans learned. But much of geometry we can see. We can look at objects and see how they fit together. Arithmetic we have to infer from the way the stuff we like to count works. And that’s probably why it’s harder to do when we start school.

What’s not good about that as an answer is that it actually applies to a lot of mathematical constructs, including those crazy exotic ones you sometimes see in science press. You know, the ones where there’s this impossibly complicated tangle with ribbons of every color and a headline about “It’s Revolutionary. It’s 46-Dimensional. It’s Breaking The Rules Of Geometry. Is It The Shape That Finally Quantizes Gravity?” or something like that. Well, describe a thing vaguely and it’ll match a lot of other things. But also when we look to new mathematical structures, we tend to look for things that resemble arithmetic. Group theory, for example, is one of the cornerstones of modern mathematical thought. It’s built around having a set of things on which we can do something that looks like addition. So it shouldn’t be a surprise that many groups have a passing resemblance to arithmetic. Mathematics may produce universal truths. But the ones we see are also ones we are readied to see by our common experience. Arithmetic is part of that common experience.

Jerry Scott and Jim Borgman’s Zits for the 14th of September, 2016. Properly speaking that is ink on his face, but I suppose saying it’s calculus pins down where it came from. Just observing.

Also Jerry Scott and Jim Borgman’s Zits for the 14th I think doesn’t really belong here. It’s just got a cameo appearance by the concept of mathematics. Dave Whamond’s Reality Check for the 17th similarly just mentions the subject. But I did want to reassure any readers worried after last week that Pierce recovered fine. Also that, you know, for not having a stomach for mathematics he’s doing well carrying on. Discipline will carry one far.

• ivasallay 3:44 am on Monday, 19 September, 2016

You said, “Twelve depictions of apples are harder to read than the number 12.” It might be a little difficult to see at first, but the twelve apples were arranged to form the numerals 1 and 2. I thought it was rather clever.


Dark Secrets of Mathematicians: Something About Integration By Parts

A friend took me up last night on my offer to help with any mathematics she was unsure about. I’m happy to do it, though of course it came as I was trying to shut things down for bed. But that part was inevitable and besides the exam was today. I thought it worth sharing here, though.

There’s going to be some calculus in this. There’s no avoiding that. If you don’t know calculus, relax about what the symbols exactly mean. It’s a good trick. Pretend anything you don’t know is just a symbol for “some mathematics thingy I can find out about later, if I need to, and I don’t necessarily have to”.

“Integration by parts” is one of the standard tricks mathematicians learn in calculus. It comes in handy if you want to integrate a function that itself looks like the product of two other functions. You find the integral of a function by breaking it up into two parts, one of which you differentiate and one of which you integrate. This gives you a product of functions and then a new integral to do. A product of functions is easy to deal with. The new integral … well, if you’re lucky, it’s an easier integral than you started with.
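Written as a formula, the rule being described is the familiar

$\int u \, dv = u v - \int v \, du$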

As you learn integration by parts you learn to look for ways to break up functions so the new integral is easier. There’s no hard and fast rule for this. But bet on “the part that has a polynomial in it” as the part that’s better differentiated. “The part that has sines and cosines in it” is probably the part that’s better integrated. An exponential, like $2^x$, is as easily differentiated as integrated. The exponential of a function, like say $2^{x^2}$, is better differentiated. These usually turn out impossible to integrate anyway. At least impossible without using crazy exotic functions.

So your classic integration-by-parts problem gives you an expression like this:

$\int x \sin(x) dx = -x \cos(x) + \int \cos(x) dx$

If you weren’t a mathematics major that might not look better to you, what with it still having integrals and cosines and stuff in it. But ask your mathematics friend. She’ll tell you. The thing on the right-hand side is way better. That last term, the integral of the cosine of x? She can do that in her sleep. It barely counts as work, at least by the time your class has got to doing integration by parts. It’ll be $-x\cos(x) + \sin(x)$.
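That answer can be spot-checked numerically; this sketch (my addition, not the post’s) differentiates the antiderivative $\sin(x) - x\cos(x)$ by finite differences and compares it against $x \sin(x)$:

```python
import math

# Spot-check: F(x) = sin(x) - x cos(x) should be an antiderivative of
# x sin(x), so a centered finite difference of F ought to match the
# integrand at any sample point.
def antiderivative(x):
    return math.sin(x) - x * math.cos(x)

def integrand(x):
    return x * math.sin(x)

h = 1e-6
parts_ok = all(
    math.isclose((antiderivative(x + h) - antiderivative(x - h)) / (2 * h),
                 integrand(x), abs_tol=1e-6)
    for x in (0.3, 1.0, 2.0, 5.0)
)
print(parts_ok)  # True
```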

But sometimes, especially if the function being integrated — the “integrand”, by the way, and good luck playing that in Scrabble — is a bunch of trig functions and exponentials, you get some sad situation like so:

$\int \sin(x) \cos(x) dx = \sin^2(x) - \int \sin(x) \cos(x) dx$

That is, the thing we wanted to integrate, on the left, turns up on the right too. The student sits down, feeling the futility of modern existence. We’re stuck with the original problem all over again and we’re short of tools to do something about it.

This is the point my friend was confused by, and is the bit of dark magic I want to talk about here. We’re not stumped! We can fall back on one of those mathematics tricks we are always allowed to do. And it’s a trick that’s so simple it seems like it can’t do anything.

It’s substitution. We are always allowed to substitute one thing for something else that’s equal to it. So in that above equation, what can we substitute, and for what? … Well, nothing in that particular bunch of symbols. We’re going to introduce a new one. It’s going to be the value of the integral we want to evaluate. Since it’s an integral, I’m going to call it ‘I’. You don’t have to call it that, but you’re going to anyway. It doesn’t need a more thoughtful name.

So I shall define:

$I \equiv \int \sin(x) \cos(x) dx$

The triple-equals-sign there is an extravagance, I admit. But it’s a common one. Mathematicians use it to say “this is defined to be equal to that”. Granted, that’s what the = sign means. But the triple sign connotes how we emphasize the definition part. That is, ‘I’ might have been anything at all, and we choose this out of the universe of possibilities.

How does this help anything? Well, it turns the integration-by-parts problem into this equation:

$I = \sin^2(x) - I$

And we want to know what ‘I’ equals. And now suddenly it’s easier to see that we don’t actually have to do any calculus from here on out. We can solve it the way we’d solve any problem in high school algebra, which is, move ‘I’ to the other side. Formally, we add the same thing to the left- and the right-hand sides. That’s ‘I’ …

$2I = \sin^2(x)$

… and then divide both sides by the same number, 2 …

$I = \frac{1}{2}\sin^2(x)$

And now remember that substitution is a free action. We can do it whenever we like, and we can undo it whenever we like. This is a good time to undo it. Putting the whole expression back in for ‘I’ we get …

$\int \sin(x) \cos(x) dx = \frac{1}{2}\sin^2(x)$

… which is the integral, evaluated.

(Someone would like to point out there should be a ‘plus C’ in there. This someone is right, for reasons that would take me too far afield to describe right now. We can overlook it for now anyway. I just want that someone to know I know what you’re thinking and you’re not catching me on this one.)
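And the answer passes the usual sanity check: differentiating it recovers the integrand, since

$\frac{d}{dx}\left[\frac{1}{2}\sin^2(x)\right] = \sin(x) \cos(x)$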

Sometimes, the integration by parts will need two or even three rounds before you get back the original integrand. This is because the instructor has chosen a particularly nasty problem for homework or the exam. It is not hopeless! But you will see strange constructs like $\frac{4}{5} I$ equalling something. Carry on.

What makes this a bit of dark magic? I think it’s because of habits. We write down something simple on the left-hand side of an equation. We get an expression for what the right-hand side should be, and it’s usually complicated. And then we try making the right-hand side simpler and simpler. The left-hand side started simple so we never even touch it again. Indeed, working out something like this it’s common to write the left-hand side once, at the top of the page, and then never again. We just write an equals sign, underneath the previous line’s equals sign, and stuff on the right. We forget the left-hand side is there, and that we can do stuff with it and to it.

I think also we get into a habit of thinking an integral and integrand and all that is some quasi-magic mathematical construct. But it isn’t. It’s just a function. It may even be just a number. We don’t know what it is, but it will follow all the same rules of numbers, or functions. Moving it around may be more writing but it’s not different work to moving ‘4’ or ‘$x^2$’ around. That’s the value of replacing the integral with a symbol like ‘I’. It’s not that there’s something we can do with ‘I’ that we can’t do with ‘$\int \sin(x)\cos(x) dx$’, other than write it in under four pen strokes. It’s that in algebra we learned the habit of moving a letter around to where it’s convenient. Moving a whole integral expression around seems different.

But it isn’t. It’s the same work done, just on a different kind of mathematics. I suspect finding out that it could be a trick that simple throws people off.
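To make the trick concrete, here’s the classic example worked through. (The integrand is my own pick, not one from the discussion above.) Call the integral I, integrate by parts twice, and the original integrand comes back, with a minus sign attached:

```latex
\begin{align*}
I &= \int e^x \sin(x)\, dx \\
  &= e^x \sin(x) - \int e^x \cos(x)\, dx \\
  &= e^x \sin(x) - e^x \cos(x) - \int e^x \sin(x)\, dx \\
  &= e^x \sin(x) - e^x \cos(x) - I
\end{align*}
```

Now treat I like any other unknown: add I to both sides and divide by two, giving $I = \frac{1}{2} e^x \left( \sin(x) - \cos(x) \right) + C$. (Yes, plus C.)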

• elkement (Elke Stangl) 8:37 am on Sunday, 18 September, 2016 Permalink | Reply

Ha – spotted that immediately! Finally all that sloppy calculus you do as a physicist has paid off :-) You are not afraid using huge integrals ‘just as a number’, e.g. in a series or as an exponent … always silently assuming that functions are well behaved, converge or whatever terms you mathematicians use for that! ;-)
That trick is used over and over in quantum physics, in perturbation theories, when you turn a differential equation into an integral equation, and then just use the first few summands if, typically, a potential / perturbation is small (as operators would be defined recursively in the original equation and you finally want the true solution to be defined in relation to the undisturbed solution). One challenge is to disentangle double integrals by ‘time-ordering’ so that a double integral becomes just the square of two integrals times a factor.

Like

• Joseph Nebus 2:50 am on Monday, 19 September, 2016 Permalink | Reply

Well, I must admit, I went to grad school in a program with a very strong applied mathematics tradition. (The joke around the department was that it had two tracks, Applied Mathematics and More Applied Mathematics.) It definitely helped my getting used to thinking of a definite integral as just a number, that could be manipulated or moved around as needed. An indefinite integral … well, it’s not properly a number, but it might as well be for this context.

(I was considering explaining the differences between definite and indefinite integrals, but that seemed a little too far a diversion and too confusing a one. Might make that a separate post sometime when I need to fill out a slow week.)

Liked by 1 person

Reading the Comics, September 10, 2016: Finishing The First Week Of School Edition

I understand in places in the United States last week wasn’t the first week of school. It was the second or third or even worse. These places are crazy, in that they do things differently from the way my elementary school did it. So, now, here’s the other half of last week’s comics.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 8th is a little freak-out about existence. Mathematicians rely on the word “exists”. We suppose things to exist. We draw conclusions about other things that do exist or do not exist. And these things that exist are not things that exist physically. It’s a bit heady to realize nobody can point to, or trap in a box, or even draw a line around “3”. We can at best talk about stuff that expresses some property of three-ness. We talk about things like “triangles” and we even draw and use representations of them. But those drawings we make aren’t Triangles, the thing mathematicians mean by the concept. They’re at best cartoons, little training wheels to help us get the idea down. Here I regret that as an undergraduate I didn’t take philosophy courses that challenged me. It seems certain to me mathematicians are using some notion of the Platonic Ideal when we speak of things “existing”. But what does that mean, to a mathematician, to a philosopher, and to the person who needs an attractive tile pattern on the floor?

Cathy Thorne’s Everyday People Cartoons for the 9th is about another bit of the philosophy of mathematics. What are the chances of something that did happen? What does it mean to talk about the chance of something happening? When introducing probability mathematicians like to set it up as “imagine this experiment, which has a bunch of possible outcomes. One of them will happen and the other possibilities will not” and we go on to define a probability from that. That seems reasonable, perhaps because we’re accepting ignorance. We may know (say) that a coin toss is, in principle, perfectly deterministic. If we knew exactly how the coin is made. If we knew exactly how it is tossed. If we knew exactly how the air currents would move during its fall. If we knew exactly what the surface it might bounce off before coming to rest is like. Instead we pretend all this knowable stuff is not, and call the result unpredictability.

But about events in the past? We can imagine them coming out differently. But the imagination crashes hard when we try to say why they would. If we gave the exact same coin the exact same toss in the exact same circumstances how could it land on anything but the exact same face? In which case how can there have been any outcome other than what did happen? Yes, I know, someone wants to rush in and say “Quantum!” Say back to that person, “waveform collapse” and wait for a clear explanation of what exactly that is. There are things we understand poorly about the transition between the future and the past. The language of probability is a reminder of this.

Hilary Price’s Rhymes With Orange for the 10th uses the classic story-problem setup of a train leaving the station. It does make me wonder how far back this story setup goes, and what they did before trains were common. Horse-drawn carriages leaving stations, I suppose, or maybe ships at sea. I quite like the teaser joke in the first panel more.

Hilary Price’s Rhymes With Orange for the 10th of September, 2016. 70 mph? Why not some nice easy number like 60 mph instead? God must really be testing.

Tom Toles’s Randolph Itch, 2 am rerun for the 10th is an Einstein The Genius comic. It felt familiar to me, but I don’t seem to have included it in previous Reading The Comics posts. Perhaps I noticed it some week that I figured a mere appearance of Einstein didn’t rate inclusion. Randolph certainly fell asleep while reading about mathematics, though.

It’s popular to tell tales of Einstein not being a very good student, and of not being that good in mathematics. It’s easy to see why. We’d all like to feel a little more like a superlative mind such as that. And Einstein worked hard to develop an image of being accessible and personable. It fits with the charming absent-minded professor image everybody but forgetful professors loves. It feels dramatically right that Einstein should struggle with arithmetic like so many of us do. It’s nonsense, though. When Einstein struggled with mathematics, it was on the edge of known mathematics. He needed advice and consultations for the non-Euclidean geometries core to general relativity? Who doesn’t? I can barely make my way through the basic notation.

Anyway, it’s pleasant to see Toles holding up Einstein for his amazing mathematical prowess. It was a true thing.

Reading the Comics, September 6, 2016: Oh Thank Goodness We’re Back Edition

That’s a relief. After the previous week’s suspicious silence Comic Strip Master Command sent a healthy number of mathematically-themed comics my way. They cover a pretty normal spread of topics. So this makes for a nice normal sort of roundup.

Mac King and Bill King’s Magic In A Minute for the 4th is an arithmetic-magic trick. Like most arithmetic magic it depends on some true but, to me, dull bit of mathematics. In this case, that 81,234,567 minus 12,345,678 is equal to something. As a kid this sort of trick never impressed me because, well, anyone can do subtraction. I didn’t appreciate that the fun of stage magic is in presenting the mundane well.

Jerry Scott and Jim Borgman’s Zits for the 5th is an ordinary mathematics-is-hard joke. But it’s elevated by the artwork, which shows off the expressive and slightly surreal style that makes the comic so reliable and popular. The formulas look fair enough, the sorts of things someone might’ve been cramming before class. If they’re a bit jumbled up, well, Pierce hasn’t been well.

Jerry Scott and Jim Borgman’s Zits for the 5th of September, 2016. It sure looks to me like there’s more things being explicitly multiplied by ‘1’ than are needed, but it might be the formulas got a little scrambled as Pierce vomited. We’ve all been there. Fun fact: apart from a bit in Calculus I where they drill you on differentiation formulas you never really need the secant. It makes a couple formulas a little more compact and that’s it, so if it’s been nagging at your mind go ahead and forget it.

Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney for the 6th is an anthropomorphic-shapes joke and I feel like it’s been here before. Ah, yeah, there it is, from about this time last year. It’s a fair one to rerun.

Mustard and Boloney popped back in on the 8th with a strip I don’t have in my archive at least. It’s your standard Pi Pun, though. If they’re smart they’ll rerun it in March. I like the coloring; it’s at least a pleasant panel to look at.

Percy Crosby’s Skippy from the 9th of July, 1929 was rerun the 6th of September. It seems like a simple kid-saying-silly-stuff strip: what is the difference between the phone numbers Clinton 2651 and Clinton 2741 when they add to the same number? (And if Central knows what the number is why do they waste Skippy’s time correcting him? And why, 87 years later, does the phone yell at me for not guessing correctly whether I need the area code for a local number and whether I need to dial 1 before that?) But then who cares what the digits in a telephone number add to? What could that tell us about anything?

As phone numbers historically developed, the sum can’t tell us anything at all. But if we had designed telephone numbers correctly we could have made it … not impossible to dial a wrong number, but at least harder. This insight comes to us from information theory, which, to be fair, we have because telephone companies spent decades trying to work out solutions to problems like people dialing numbers wrong or signals getting garbled in the transmission. We can allow for error detection by schemes as simple as passing along, besides the numbers, the sum of the numbers. This can allow for the detection of a single error: had Skippy called for number 2641 instead of 2741 the problem would be known. But it’s helpless against two errors that compensate for each other, calling for 2651 instead of 2741. But we could detect a second error by calculating some second term based on the number we wanted, and sending that along too.

By adding some more information, other modified sums of the digits we want, we can even start correcting errors. We understand the logic of this intuitively. When we repeat a message twice after sending it, we are trusting that even if one copy of the message is garbled the recipient will take the version received twice as more likely what’s meant. We can design subtler schemes, ones that don’t require we repeat the number three times over. But that should convince you that we can do it.
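Here’s a toy version of the digit-sum scheme, sketched in Python. (The code and the function names are my own illustration; real telephone networks never worked this way.) One wrong digit gets caught; two errors that compensate, like Skippy’s 2651-for-2741, slip through:

```python
def with_checksum(number):
    """Append the digit sum to a number given as a string of digits."""
    digits = [int(d) for d in number]
    return number, sum(digits)

def check(received, checksum):
    """Return True if the received digits still match the checksum."""
    return sum(int(d) for d in received) == checksum

_, checksum = with_checksum("2741")

print(check("2741", checksum))  # True: transmitted correctly
print(check("2641", checksum))  # False: one wrong digit, detected
print(check("2651", checksum))  # True: two compensating errors slip through
```

A second checksum, computed differently (say, weighting each digit by its position), would catch that last case too.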

The tradeoff is obvious. We have to say more digits of the number we want. It isn’t hard to reach the point we’re sending more error-detecting and error-correcting numbers than we are numbers we want. And what if we make a mistake in the error-correcting numbers? (If we used a smart enough scheme, we can work out the error was in the error-correcting number, and relax.) If it’s important that we get the message through, we shrug and accept this. If there’s no real harm done in getting the message wrong — if we can shrug off the problem of accidentally getting the wrong phone number — then we don’t worry about making a mistake.

And at this point we’re only a few days into the week. I have enough hundreds of words on the close of the week I’ll put off posting that a couple of days. It’s quite good having the comics back to normal.

Why Stuff Can Orbit, Part 4: On The L

Less way previously:

We were chatting about central forces. In these a small object — a satellite, a planet, a weight on a spring — is attracted to the center of the universe, called the origin. We’ve been studying this by looking at potential energy, a function that in this case depends only on how far the object is from the origin. But to find circular orbits, we can’t just look at the potential energy. We have to modify this potential energy to account for angular momentum. This essay I mean to discuss that angular momentum some.

Let me talk first about the potential energy. Mathematical physicists usually write this as a function named U or V. I’m using V. That’s what my professor used teaching this, back when I was an undergraduate several hundred thousand years ago. A central force, by definition, changes only with how far you are from the center. I’ve put the center at the origin, because I am not a madman. This lets me write the potential energy as V = V(r).

V(r) could, in principle, be anything. In practice, though, I am going to want it to be r raised to a power. That is, V(r) is equal to $C r^n$. The ‘C’ here is a constant. It’s a scaling constant. The bigger a number it is the stronger the central force. The closer the number is to zero the weaker the force is. In standard units, gravity has a constant incredibly close to zero. This makes orbits very big things, which generally works out well for planets. In the mathematics of masses on springs, the constant is closer to middling little numbers like 1.

The ‘n’ here is a deceiver. It’s a constant number, yes, and it can be anything we want. But the use of ‘n’ as a symbol has connotations. Usually when a mathematician or a physicist writes ‘n’ it’s because she needs a whole number. Usually a positive whole number. Sometimes it’s negative. But we have a legitimate central force if ‘n’ is any real number: 2, -1, one-half, the square root of π, any of that is good. If you just write ‘n’ without explanation, the reader will probably think “integers”, possibly “counting numbers”. So it’s worth making explicit when this isn’t so. It’s bad form to surprise the reader with what kind of number you’re even talking about.

(Some number of essays on we’ll find out that the only values ‘n’ can have that are worth anything are -1, 2, and 7. And 7 isn’t all that good. But we aren’t supposed to know that yet.)

$C r^n$ isn’t the only kind of central force that could exist. Any function rule would do. But it’s enough. If we wanted a more complicated rule we could just add two, or three, or more potential energies together. This would give us $V(r) = C_1 r^{n_1} + C_2 r^{n_2}$, with C1 and C2 two possibly different numbers, and n1 and n2 two definitely different numbers. (If n1 and n2 were the same number then we should just add C1 and C2 together and stop using a more complicated expression than we need.) Remember that Newton’s Law of Motion about the sum of multiple forces being something vector something something direction? When we look at forces as potential energy functions, that law turns into just adding potential energies together. They’re well-behaved that way.

And if we can add these r-to-a-power potential energies together then we’ve got everything we need. Why? Polynomials. We can approximate most any potential energy that would actually happen with a big enough polynomial. Or at least a polynomial-like function. These r-to-a-power forces are a basis set for all the potential energies we’re likely to care about. Understand how to work with one and you understand how to work with them all.

Well, one exception. The logarithmic potential, V(r) = C log(r), is really interesting. And it has real-world applicability. It describes how strongly two vortices, two whirlpools, attract each other. You could approximate the logarithm with polynomials. But logarithms are pretty well-behaved functions. You might be better off just treating them as a special case.

Still, at least to start with, we’ll stick with $V(r) = C r^n$ and you know what I mean by all those letters now. So I’m free to talk about angular momentum.

You’ve probably heard of momentum. It’s got something to do with movement, only sports teams and political campaigns are always gaining or losing it somehow. When we talk of that we’re talking of linear momentum. It describes how much mass is moving how fast in what direction. So it’s a vector, in three-dimensional space. Or two-dimensional space if you’re making the calculations easier. To find what the vector is, we make a list of every object that’s moving. We take its velocity — how fast it’s moving and in what direction — and multiply that by its mass. Mass is a single number, a scalar, and we’re always allowed to multiply a vector by a scalar. This gets us another vector. Once we’ve done that for everything that’s moving, we add all those product vectors together. We can always add vectors together. And this gives us a grand total vector, the linear momentum of the system.
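The recipe can be sketched in a few lines of Python. (The masses and velocities here are arbitrary numbers of my own choosing, in two dimensions to keep the calculations easier.)

```python
# Total linear momentum: the sum of mass * velocity over every moving object.
# Each entry is (mass, (vx, vy)): a scalar mass and a 2D velocity vector.
objects = [
    (2.0, (3.0, 0.0)),   # mass 2, moving in the +x direction at speed 3
    (1.0, (-4.0, 2.0)),  # mass 1, moving up and to the left
    (0.5, (0.0, -6.0)),  # mass 0.5, moving in the -y direction
]

# Multiplying each velocity (a vector) by its mass (a scalar), then adding
# the products component by component, gives the grand total vector.
px = sum(m * vx for m, (vx, vy) in objects)
py = sum(m * vy for m, (vx, vy) in objects)

print((px, py))  # prints (2.0, -1.0), the system's linear momentum
```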

And that’s conserved. If one part of the system starts moving slower it’s because other parts are moving faster, and vice-versa. In the real world momentum seems to evaporate. That’s because some of the stuff moving faster turns out to be air objects bumped into, or particles of the floor that get dragged along by friction, or other stuff we don’t care about. That momentum can seem to evaporate is what makes its use in talking about sports teams or political campaigns make sense. It also annoys people who want you to know they understand science words better than you. So please consider this my authorization to use “gaining” and “losing” momentum in this sense. Ignore complainers. They’re the people who complain the word “decimate” gets used to mean “destroy way more than ten percent of something”, even though that’s the least bad mutation of an English word’s meaning in three centuries.

Angular momentum is also a vector. It’s also conserved. We can calculate what that vector is by the same sort of process, that of calculating something on each object that’s spinning and adding it all up. In real applications it can seem to evaporate. But that’s also because the angular momentum is going into particles of air. Or it rubs off grease on the axle. Or it does other stuff we wish we didn’t have to deal with.

The calculation is a little harder to deal with. There’s three parts to a spinning thing. There’s the thing, and there’s how far it is from the axis it’s spinning around, and there’s how fast it’s spinning. So you need to know how fast it’s travelling in the direction perpendicular to the shortest line between the thing and the axis it’s spinning around. Its angular momentum is going to be as big as the mass times the distance from the axis times the perpendicular speed. It’s going to be pointing in whichever axis direction makes its movement counterclockwise. (Because that’s how physicists started working this out and it would be too much bother to change now.)
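For a single object the computation comes down to one multiplication, sketched here in Python. (The function name and the Earth-ish sample numbers are mine; the sign convention is the counterclockwise-positive one just described.)

```python
def angular_momentum(m, r, v_perp):
    """L = mass * distance from the axis * speed perpendicular to the radius.

    Positive v_perp means counterclockwise motion, so positive L;
    negative v_perp means clockwise motion, so negative L.
    """
    return m * r * v_perp

# Very rough Earth-around-Sun numbers: mass in kg, distance in m, speed in m/s.
print(angular_momentum(5.97e24, 1.5e11, 3.0e4))  # on the order of 1e40
```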

You might ask: wait, what about stuff like a wheel that’s spinning around its center? Or a ball being spun? That can’t be an angular momentum of zero? How do we work that out? The answer is: calculus. Also, we don’t need that. This central force problem I’ve framed so that we barely even need algebra for it.

See, we only have a single object that’s moving. That’s the planet or satellite or weight or whatever it is. It’s got some mass, the value of which we call ‘m’ because why make it any harder on ourselves. And it’s spinning around the origin. We’ve been using ‘r’ to mean the number describing how far it is from the origin. That’s the distance to the axis it’s spinning around. Its velocity — well, we don’t have any symbols to describe what that is yet. But you can imagine working that out. Or you trust that I have some clever mathematical-physics tool ready to introduce to work it out. I have, kind of. I’m going to ignore it altogether. For now.

The symbol we use for the total angular momentum in a system is $\vec{L}$. The little arrow above the symbol is one way to denote “this is a vector”. It’s a good scheme, what with arrows making people think of vectors and it being easy to write on a whiteboard. In books, sometimes, we make do just by putting the letter in boldface, L, which is easier for old-fashioned word processors to do. If we’re sure that the reader isn’t going to forget that L is this vector then we might stop highlighting the fact altogether. That’s even less work to do.

It’s going to be less work yet. Central force problems like this mean the object can move only in a two-dimensional plane. (If it didn’t, it wouldn’t conserve angular momentum: the direction of $\vec{L}$ would have to change. Sounds like magic, but trust me.) The angular momentum’s direction has to be perpendicular to that plane. If the object is spinning around on a sheet of paper, the angular momentum is pointing straight outward from the sheet of paper. It’s pointing toward you if the object is moving counterclockwise. It’s pointing away from you if the object is moving clockwise. What direction it’s pointing is locked in.

All we need to know is how big this angular momentum vector is, and whether it’s positive or negative. So we just care about this number. We can call it ‘L’, no arrow, no boldface, no nothing. It’s just a number, the same as is the mass ‘m’ or distance from the origin ‘r’ or any of our other variables.

If ‘L’ is zero, this means there’s no total angular momentum. This means the object can be moving directly out from the origin, or directly in. This is the only way that something can crash into the center. So if setting L to be zero doesn’t allow that then we know we did something wrong, somewhere. If ‘L’ isn’t zero, then the object can’t crash into the center. If it did we’d be losing angular momentum. The object’s mass times its distance from the center times its perpendicular speed would have to be some non-zero number, even when the distance was zero. We know better than to look for that.

You maybe wonder why we use ‘L’ of all letters for the angular momentum. I do. I don’t know. I haven’t found any sources that say why this letter. Linear momentum, which we represent with $\vec{p}$, I know. Or, well, I know the story every physicist says about it. p is the designated letter for linear momentum because we used to use the word “impetus”, as in “impulse”, to mean what we mean by momentum these days. And “p” is the first letter in “impetus” that isn’t needed for some more urgent purpose. (“m” is too good a fit for mass. “i” has to work both as an index and as that number which, squared, gives us -1. And for that matter, “e” we need for that exponentials stuff, and “t” is too good a fit for time.) That said, while everybody, everybody, repeats this, I don’t know the source. Perhaps it is true. I can imagine, say, Euler or Lagrange in their writing settling on “p” for momentum and everybody copying them. I just haven’t seen a primary citation showing this is so.

(I don’t mean to sound too unnecessarily suspicious. But just because everyone agrees on the impetus-thus-p story doesn’t mean it’s so. I mean, every Star Trek fan or space historian will tell you that the first space shuttle would have been named Constitution until the Trekkies wrote in and got it renamed Enterprise. But the actual primary documentation that the shuttle would have been named Constitution is weak to nonexistent. I’ve come to the conclusion NASA had no plan in mind to name space shuttles until the Trekkies wrote in and got one named. I’ve done less poking around the impetus-thus-p story, in that I’ve really done none, but I do want it on record that I would like more proof.)

Anyway, “p” for momentum is well-established. So I would guess that when mathematical physicists needed a symbol for angular momentum they looked for letters close to “p”. When you get into more advanced corners of physics “q” gets called on to be position a lot. (Momentum and position, it turns out, are nearly-identical-twins mathematically. So making their symbols p and q offers aesthetic charm. Also great danger if you make one little slip with the pen.) “r” is called on for “radius” a lot. Looking on, “t” is going to be time.

On the other side of the alphabet, well, “o” is just inviting danger. “n” we need to count stuff. “m” is mass or we’re crazy. “l” might have just been the nearest we could get to “p” without intruding on a more urgently-needed symbol. (“s” we use a lot for parameters like length of an arc that work kind of like time but aren’t time.) And then shift to the capital letter, I expect, because a lowercase l looks like a “1”, to everybody’s certain doom.

The modified potential energy, then, is going to include the angular momentum L. At least, the amount of angular momentum. It’s also going to include the mass of the object moving, and the radius r that says how far the object is from the center. It will be:

$V_{eff}(r) = V(r) + \frac{L^2}{2 m r^2}$

V(r) was the original potential, whatever that was. The modifying term, with this square of the angular momentum and all that, I kind of hope you’ll just accept on my word. The $L^2$ means that whether the angular momentum is positive or negative, the potential will grow very large as the radius gets small. If it didn’t, there might not be orbits at all. And if the angular momentum is zero, then the effective potential is the same original potential that let stuff crash into the center.

For the sort of r-to-a-power potentials I’ve been looking at, I get an effective potential of:

$V_{eff}(r) = C r^n + \frac{L^2}{2 m r^2}$

where n might be an integer. I’m going to pretend a while longer that it might not be, though. C is certainly some number, maybe positive, maybe negative.

If you pick some values for C, n, L, and m you can sketch this out. If you just want a feel for how this Veff looks it doesn’t much matter what values you pick. Changing values just changes the scale, that is, where a circular orbit might happen. It doesn’t change whether it happens. Picking some arbitrary numbers is a good way to get a feel for how this sort of problem works. It’s good practice.
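Here’s a minimal way to do that sketching in Python, without even a plotting library: tabulate Veff on a grid of radii and see where it bottoms out. (The values of C, n, L, and m are arbitrary picks of mine; the spring-like n = 2 just makes a nice bowl.)

```python
# Effective potential V_eff(r) = C * r**n + L**2 / (2 * m * r**2),
# sampled on a grid of radii.  The smallest sampled value marks,
# roughly, where a circular orbit sits.
C, n = 1.0, 2.0   # spring-like central force, V(r) = r^2
L, m = 1.0, 1.0   # some nonzero angular momentum, unit mass

def v_eff(r):
    return C * r**n + L**2 / (2.0 * m * r**2)

radii = [0.1 + 0.001 * i for i in range(3000)]  # start above r = 0
r_min = min(radii, key=v_eff)                   # radius of the lowest sample

print(r_min, v_eff(r_min))
```

For these particular numbers a bit of calculus puts the true minimum at $r = 2^{-1/4} \approx 0.84$, and the grid search agrees to within its spacing.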

Sketching will convince you there are energy minimums, where we can get circular orbits. It won’t say where to find them without some trial-and-error or building a model of this energy and seeing where a ball bearing dropped into it rolls to a stop. We can do this more efficiently.

Reading the Comics, September 3, 2016: Summer Vacation Edition

I quite like doing Reading The Comics posts. I do feel sometimes like I’m repeating myself; how much is there to say about a comic where the student gives a snarky response to a story problem? Or where someone bakes a pie to talk about circles? And sometimes I worry that I’m slacking, since there’s not much to explain in what a spray of algebraic symbols mean, or would mean if they were perfectly rendered.

But I do like the feel of playing to an audience. Cartoonists call out topics and I do my best to say something interesting about them. It means I do not know whether I’ll be saying something about game theory or infinitely large sets or the history of numerals or the ability of birds to count in any given week. I have to be on top of a wide range of topics, or figure a way to get on top quickly. Some weeks it’ll be very busy; some weeks it’ll be quiet. It makes for fun, varied challenges.

This week Comic Strip Master Command sent me nothing. None of the comics I read, from Comics Kingdom, from Gocomics.com, and a couple of other miscellaneous things I read from long habit (like Joe Martin’s comics, or the Jumble puzzle), addressed any mathematics topics. I do not know the last time I had a subject drought like this. Certainly it’s been a while.

Mathematics has gotten a few cameos. Rick Stromoski’s Soup To Nutz almost got on point with a useful mnemonic for remembering which are odd and which are even numbers. Tony Rubino and Gary Markstein’s Daddy’s Home and Mark Tatulli’s Heart of the City both have “mathematics is so hard” as excuses for jokes. But that isn’t really about mathematics. Any subject people hated would do.

Comic strips work under an astounding set of constraints. They have to be incredibly compact, they have to carry their point in text and illustration, and the ones that appear in newspapers have to appeal to a broad audience in a way even television shows barely need to anymore. Given this, some stock jokes might well be essential. I couldn’t fault comic strip artists for using them. Similarly I don’t mind when a cartoonist uses a pile of scribbles for a mathematical concept, or even if they get an idea simplified to the point of being wrong. They’re amazing pieces of art to have at all. If I can make something educational of them that’s great, but that’s my adding to what they do.

So I’m just assuming Comic Strip Master Command wanted me to have a week off and that this doesn’t reflect any hard feelings between me and any cartoonists. We’ll know this time next week if there’s real trouble.

• ivasallay 1:57 am on Tuesday, 6 September, 2016 Permalink | Reply

The link to Heart of the City didn’t work. Here it is http://www.gocomics.com/heartofthecity/2016/09/02

Like

• Joseph Nebus 9:55 pm on Tuesday, 6 September, 2016 Permalink | Reply

You’re right! I don’t know how I lost the link. Thanks for spotting it.

Like

How August 2016 Treated My Mathematics Blog

August 2016 is not actually the month I gave up around here. It was one of my least-prolific months in a long while, though. It was personally a less preoccupied month than July was, but I think a lot of things I’d put off to keep projects like Theorem Thursdays going came back to demand attention and my writing flagged off. And there’s my usual slackness in going around to other blogs and paying visits and writing comments and all that. So let’s see just how bad my readership numbers were, according to WordPress. Just a second, let me look. I think I’m braced.

Huh. So my eleven posts in August drew 1,002 page views from 531 unique visitors here. That’s down from July’s 1,057 views from 585 visitors, and from June’s 1,099 views and 598 visitors. But July had 17 posts, and June 16, so the count of readers per post is way up. Well, if people like seeing me in lesser amounts, I guess that’s all right.

If they do. There were only 107 likes given to my posts in August, down from July’s 177 and June’s 155. That’s almost constant if we look at it per-post.

The number of comments collapsed. There were 16 in August, compared to 33 in July and 37 in June. That’s a good bit down per-post, too. I suspect it’ll pick up once the Why Stuff Can Orbit posts get going in earnest again.

Popular Posts:

I didn’t have as strongly popular posts this month. In July all the top-ten posts had at least thirty page views. In August it was a mere 19. But what was popular did reflect, I’d say, a good sample of the kind of stuff I write:

Listing Countries:

I think the listing of every country worked out last month. So here, let me do it again.

United States 674
Philippines 43
India 30
Germany 29
United Kingdom 21
Slovenia 20
Australia 15
Austria 15
France 11
Singapore 9
Sweden 7
United Arab Emirates 6
Brazil 5
South Africa 5
Indonesia 4
Puerto Rico 4
European Union 3
Malaysia 3
Portugal 3
Croatia 2
Japan 2
Mexico 2
New Zealand 2
Russia 2
Spain 2
Thailand 2
Vietnam 2
Bahrain 1
Belgium 1
Czech Republic 1
Denmark 1 (*)
Honduras 1
Ireland 1
Italy 1
Jamaica 1
Lithuania 1 (*)
Netherlands 1
Norway 1
South Korea 1
Sri Lanka 1
Switzerland 1
Turkey 1 (*)

Denmark, Lithuania, and Turkey were single-reader countries last month too. Nobody’s on a three-month streak. European Union has gone from two to three page views. Still not a country.

Search Term Non-Poetry:

That cryptic “origin is the gateway” thing is gone again. What isn’t gone?

• divergence and stokes theorem cartoons
• comics strips of james clerk maxwell (?)
• komiks arithmetic sequence in real life situation (??)
• stock theorem and divergence theorem cartoon
• segar bernice (a Popeye thing. Bernice the Whiffle Hen was part of the Thimble Theatre story by which cartoonist E C Segar discovered the best character he ever wrote)

Yeah, I know. Not much of anything.

The month started with my blog having 40,396 recorded page views — I missed whoever was number 40,000 — from some 16,614 recorded visitors. But my blog started before WordPress told us anything about unique visitors so who knows whether that means anything.

WordPress says I start September with 614 total followers, which isn’t very far up from the start of August’s 610. But it wasn’t a month where I did much to draw attention to myself. If you want to join me as a WordPress.com follower there ought to be a button in the upper-right corner, a bit below and to the right of my blog name and above the “Or Follow By Way Of RSS” tag. There’s also a Follow Blog Via Email option. And I’m on Twitter also, like so many people are these days.

WordPress says the most popular day for reading stuff here is Sunday, with 21 percent of page views last month. That seems reasonable; I’ve made Sunday the default day for Reading the Comics posts and haven’t had to skip a week yet. Sunday’s been the most popular day of the week for three months now. It says the most popular hour is 6 pm, with 12 percent of page views. It had been 3 pm in June and July. I’ve tended to set things to post at 6 pm Universal Time, so maybe this reflects people reading stuff just as I post it. That too seems like what we ought to expect. I don’t know why I get all suspicious of that.

• Ken Dowell 2:56 am on Saturday, 3 September, 2016 Permalink | Reply

August is a slow month for a lot of us. I posted about half the amount that I usually do.


• Joseph Nebus 4:32 pm on Sunday, 4 September, 2016 Permalink | Reply

I was dazed all my August altogether. It was a slow month for my writing and reading, but it somehow never really left me spare time. Well, I got some decent time in playing Roller Coaster Tycoon 3 for the first time in ages, but it wasn’t that much.


• LFFL 11:00 pm on Monday, 5 September, 2016 Permalink | Reply

You’ve got a variety across the world there.


• Joseph Nebus 9:59 pm on Tuesday, 6 September, 2016 Permalink | Reply

I don’t know which I’m more surprised by: that there are so many readers from countries that aren’t the United States, Canada, or United Kingdom or that there aren’t more. It makes sense that I should attract readers from English-speaking nations. But there’s English-speakers in every country and I don’t think that I write with such a strong cultural bias as to not make sense in (say) Kenya. Of course the nature of cultural bias is that it’s so hard to see it from within …


• Joseph Nebus 6:00 pm on Wednesday, 31 August, 2016 Permalink | Reply Tags: basketball ( 14 ), fractals, hot hands, Julia Sets, Mandelbrot Sets, philosophy ( 18 ), sports ( 25 ), thinking

I’ve found a good way to procrastinate on the next essay in the Why Stuff Can Orbit series. (I’m considering explaining all of differential calculus, or as much as anyone really needs, to save myself a little work later on.) In the meanwhile, though, here’s some interesting reading that’s come to my attention the last few weeks and that you might procrastinate your own projects with. (Remember Benchley’s Principle!)

First is Jeremy Kun’s essay Habits of highly mathematical people. I think it’s right in describing some of the worldview mathematics training instills, or that encourages people to become mathematicians. It does seem to me, though, that most everything Kun describes is also true of philosophers. I’m less certain, but I strongly suspect, that it’s also true of lawyers. These concentrations all tend to encourage thinking about what we mean by things, and to test those definitions by thought experiments. If we suppose this to be true, then what implications would it have? What would we have to conclude is also true? Does it include anything that would be absurd to say? And are the results useful enough that we can accept a bit of apparent absurdity?

New York magazine had an essay: Jesse Singal’s How Researchers Discovered the Basketball “Hot Hand”. The “Hot Hand” phenomenon is one every sports enthusiast, and most casual fans, know: sometimes someone is just playing really, really well. The problem has always been figuring out whether it exists. Do anything that isn’t a sure bet long enough and there will be streaks. There’ll be a stretch where it always happens; there’ll be a stretch where it never does. That’s how randomness works.

But it’s hard to show that. The messiness of the real world interferes. A chance of making a basketball shot is not some fixed thing over the course of a career, or over a season, or even over a game. Sometimes players do seem to be hot. Certainly anyone who plays anything competitively experiences a feeling of being in the zone, during which stuff seems to just keep going right. It’s hard to disbelieve something that you witness, even experience.

So the essay describes some of the challenges of this: coming up with a definition of a “hot hand”, for one. Coming up with a way to test whether a player has a hot hand. Seeing whether they’re observed in the historical record. Singal’s essay writes about some of the history of studying hot hands. There is a lot of probability, and of psychology, and of experimental design in it.

And then there’s this intriguing question Analysis Fact Of The Day linked to: did Gaston Julia ever see a computer-generated image of a Julia Set? There are many Julia Sets; they and their relative, the Mandelbrot Set, became trendy in the fractals boom of the 1980s. If you knew a mathematics major back then, there was at least one on her wall. It typically looks like a craggy, lightning-rimmed cloud. Its shapes are not easy to imagine. It’s almost designed for the computer to render. Gaston Julia died in March of 1978. Could he have seen a depiction?

It’s not clear. The linked discussion digs up early computer renderings. It also brings up an example of a late-19th-century hand-drawn depiction of a Julia-like set, and compares it to a modern digital rendition of the thing. Numerical simulation saves a lot of tedious work; but it’s always breathtaking to see how much can be done by reason.

• sheldonk2014 1:26 am on Wednesday, 28 September, 2016 Permalink | Reply

I just thought of one Joseph
How many stiches in an average size shirt


Reading the Comics, August 27, 2016: Calm Before The Term Edition

Here in the United States schools are just lurching back into the mode where they have students come in and do stuff all day. Perhaps this is why it was a routine week. Comic Strip Master Command wants to save up a bunch of story problems for us. But here’s what the last seven days sent into my attention.

Jeff Harris’s Shortcuts educational feature for the 21st is about algebra. It’s got a fair enough blend of historical trivia and definitions and examples and jokes. I don’t remember running across the “number cruncher” joke before.

Mark Anderson’s Andertoons for the 23rd is your typical student-in-lecture joke. But I do sympathize with students not understanding when a symbol gets used for different meanings. It throws everyone. But sometimes the things important to note clearly in one section are different from the needs in another section. No amount of warning will clear things up for everybody, but we try anyway.

Tom Thaves’s Frank and Ernest for the 23rd tells a joke about collapsing wave functions, which is why you never see this comic in a newspaper but always see it on a physics teacher’s door. This is properly physics, specifically quantum mechanics. But it has mathematical import. The most practical model of quantum mechanics describes what state a system is in by something called a wave function. And we can turn this wave function into a probability distribution, which describes how likely the system is to be in each of its possible states. “Collapsing” the wave function is a somewhat mysterious and controversial practice. It comes about because if we know nothing about a system then it may have one of many possible values. If we observe, say, the position of something though, then we have one possible value. The wave functions before and after the observation are different. We call it collapsing, reflecting how a universe of possibilities collapsed into a mere fact. But it’s hard to find an explanation for what that is that’s philosophically and physically satisfying. This problem leads us to Schrödinger’s Cat, and to other challenges to our sense of how the world could make sense. So, if you want to make your mark here’s a good problem for you. It’s not going to be easy.
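The bookkeeping of turning a wave function into a probability distribution is simple enough to show in a few lines. This is a little sketch of my own, with made-up amplitudes for a made-up four-state system; the point is just that each probability is the squared magnitude of an amplitude, normalized so everything sums to one.

```python
# Turning a (discretized) wave function into a probability distribution:
# each state's probability is the squared magnitude of its amplitude,
# normalized so everything sums to one. The amplitudes are made-up
# numbers, just to show the bookkeeping.

amplitudes = [1 + 1j, 0.5j, -1.0, 0.25]
weights = [abs(a) ** 2 for a in amplitudes]
total = sum(weights)
probabilities = [w / total for w in weights]

print(probabilities)  # sums to 1; the biggest amplitude gives the likeliest state
```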

John Allison’s Bad Machinery for the 24th tosses off a panel full of mathematics symbols as proof of hard thinking. In other routine references John Deering’s Strange Brew for the 26th is just some talk about how hard fractions are.

While it’s outside the proper bounds of mathematics talk, Tom Toles’s Randolph Itch, 2 am for the 23rd is a delight. My favorite strip of this bunch. Should go on the syllabus.

Why Stuff Can Orbit, Part 3: It Turns Out Spinning Matters

Way previously:

Before the big distractions of Theorem Thursdays and competitive pinball events and all that I was writing up the mathematics of orbits. Last time I’d got to establishing that there can’t be such a thing as an orbit. This seems to disagree with what a lot of people say we can observe. So I want to resolve that problem. Yes, I’m aware I’m posting this on a Thursday, which I said I wasn’t going to do because it’s too hard on me to write. I don’t know how it worked out like that.

Let me get folks who didn’t read the previous stuff up to speed. I’m using as model two things orbiting each other. I’m going to call it a sun and a planet because it’s way too confusing not to give things names. But they don’t have to be a sun and a planet. They can be a planet and moon. They can be a proton and an electron if you want to pretend quantum mechanics isn’t a thing. They can be a wood joist and a block of rubber connected to it by a spring. That’s a legitimate central force. They can even be stuff with completely made-up names representing made-up forces. So far I’m supposing the things are attracted or repelled by a force with a strength that depends on how far they are from each other but on nothing else.

Also I’m supposing there are only two things in the universe. This is because the mathematics of two things with this kind of force is easy to do. An undergraduate mathematics or physics major can do it. The mathematics of three things is too complicated to do. I suppose somewhere around two-and-a-third things the mathematics gets hard enough that you need an expert, but the expert can do it.

Mathematicians and physicists will call this sort of problem a “central force” problem. We can make it easier by supposing the sun is at the center of the universe, or at least our coordinate system. So we don’t have to worry about it moving. It’s just there at the center, the “origin”, and it’s only the planet that moves.

Forces are tedious things to deal with. They’re vectors. In this context that makes them bundles of three quantities each related to the other two. We can avoid a lot of hassle by looking at potential energy instead. Potential energy is a scalar, a single number. Numbers are nice and easy. Calculus tells us how to go from potential energy to forces, in case we need the forces. It also tells us how to go from forces to potential energy, so we can do the easier problem instead. So we do.
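That calculus step, getting the force back out of the potential energy, can be sketched numerically. The spring potential and the constants below are my own example, nothing from the problem above; a centered finite difference stands in for the derivative.

```python
# Recover a force from a potential energy: F(r) = -dV/dr.
# A centered finite difference stands in for the derivative.

def force_from_potential(V, r, h=1e-6):
    """Approximate F(r) = -V'(r) with a centered difference."""
    return -(V(r + h) - V(r - h)) / (2 * h)

# My example: a spring potential V(r) = 0.5 * k * r**2, whose force is -k * r.
k = 3.0
def V_spring(r):
    return 0.5 * k * r**2

print(force_from_potential(V_spring, 2.0))  # close to -k * 2.0 = -6.0
```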

To write about potential energy mathematical physicists use exactly the letter you would guess they’d use if every other letter were unavailable for some reason: V. Or U, if they prefer. I’ll stick with V. Right now I don’t want to say anything about what rule determines the values of V. I just want to allow that its value changes as the planet’s distance from the star — the radius ‘r’ of its orbit — changes. So we make that clear by writing the potential energy is V = V(r). (The potential energy might change with the mass of the planet or sun, or the strength of gravity in the universe, or whatever. But we’re going to pretend those don’t change, not for the problem we’re doing, so we don’t have to write them out.)

If you draw V(r) versus r you can discover right away circular orbits. They’re ones that are local maximums or local minimums of V(r). Physical intuition will help us here. Imagine the graph of the potential energy as if it were a smooth bowl. Drop a marble into it. Where would the marble come to rest? That’s a local minimum. The radius of that minimum is a circular orbit. (Oh, a local maximum, where the marble is at the top of a hill and doesn’t fall to either side, could be a circular orbit. But it isn’t going to be stable. The marble will roll one way or another given the slightest chance.)

The potential energy for a force like gravity or electric attraction looks like the distance, r, raised to a power. And then multiplied by some number, which is where we hide gravitational constants and masses and all that stuff. Generally, it looks like $V(r) = C r^n$ where C is some number and n is some other number. For gravity and electricity that number n is -1. For two particles connected by a spring that number n is +2. Could be anything.

The trouble is if you draw these curves you realize that a marble dropped in would never come to a stop. It would roll down to the center, the planet falling into the sun. Or it would roll away forever, the planet racing into deep space. Either way it doesn’t orbit or do anything near orbiting. This seems wrong.

It’s not, though. Suppose the force is repelling, that is, the potential energy gets to be smaller and smaller numbers as the distance increases. Then the two things do race away from each other. Physics students are asked to imagine two positive charges let loose next to each other. Physics students understand they’ll go racing away from each other, even though we don’t see stuff in the real world that does that very often. We suppose the students understand, though. These days I guess you can make an animation of it and people will accept that as if it’s proof of anything.

Suppose the force is attracting. Imagine just dropping a planet out somewhere by a sun. Set it carefully just in place and let it go and get out of the way before anything happens. This is what we do in physics and mathematics classes, so that’s the kind of fun stuff you skipped if you majored in something else. But then we go on to make calculations about it. But that’ll orbit, right? It won’t just drop down into the sun and get melted or something?

Not so, the way I worded it. If we set the planet into space so it was holding still, not moving at all, then it will fall. Plummet, really. The planet’s attracted to the sun, and it moves in that direction, and it’s just going to keep moving that way. If it were as far from the center as the Earth is from the Sun it’ll take its time, yes, but it’ll fall into the sun and not do anything remotely like orbiting. And yet there’s still orbits. What’s wrong?

What’s wrong is a planet isn’t just sitting still there waiting to fall into the sun. Duh, you say. But why isn’t it just sitting still? That’s because it’s moving. Might be moving in any direction. We can divide that movement up into two pieces. One is the radial movement, how fast it’s moving towards or away from the center, that is, along the radius between sun and planet. If it’s a circular orbit this speed is zero; the planet isn’t moving any closer or farther away. If this speed isn’t zero it might affect how fast the planet falls into the sun, but it won’t affect the fact of whether it does or not. No more than how fast you toss a ball up inside a room changes whether it’ll eventually hit the floor.

It’s the other part, the transverse velocity, that matters. This is the speed the thing is moving perpendicular to the radius. It’s possible that this is exactly zero and then the planet does drop into the sun. It’s probably not. And what that means is that the planet-and-sun system has an angular momentum. Angular momentum is like regular old momentum, only for spinning. And as with regular momentum, the total is conserved. It won’t change over time. When I was growing up this was always illustrated by thinking of ice skaters doing a spin. They pull their arms in, they spin faster. They put their arms out, they spin slower.

(Ice skaters eventually slow down, yes. That’s for the same reasons they slow down if they skate in a straight line even though regular old momentum, called “linear momentum” if you want to be perfectly clear, is also conserved. It’s because they have to get on to the rest of their routine.)

The same thing has to happen with planets orbiting a sun. If the planet moves closer to the sun, it speeds up; if it moves farther away, it slows down. To fall into the exact center while conserving angular momentum demands the planet get infinitely fast. This they don’t typically do.
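That speeding-up-as-it-moves-closer behavior drops right out of the angular momentum being the mass times the radius times the transverse speed. A little sketch, with numbers I made up:

```python
# Angular momentum of a point mass: L = m * r * v_t, where v_t is the
# transverse speed, the part of the velocity perpendicular to the radius.
# Holding L fixed, a smaller radius demands a bigger transverse speed.

def transverse_speed(L, m, r):
    """Transverse speed needed at radius r to carry angular momentum L."""
    return L / (m * r)

m, L = 1.0, 10.0
v_far = transverse_speed(L, m, 4.0)
v_near = transverse_speed(L, m, 2.0)
print(v_far, v_near)  # halving the radius doubles the transverse speed
```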

There was a tipoff to this. It’s from knowing the potential energy V(r) only depends on the distance between sun and planet. If you imagine taking the system and rotating it all by any angle, you wouldn’t get any change in the forces or the way things move. It would just change the values of the coordinates you used to describe this. Mathematical physicists describe this as being “invariant”, which means what you’d imagine, under a “continuous symmetry”, which means a change that isn’t … you know, discontinuous. Rotating things as if they were on a pivot, that is, instead of (like) reflecting them through a mirror.

And invariance under a continuous symmetry like this leads to a conservation law. This is known from Noether’s Theorem. You can find it explained quite well on every pop-mathematics and pop-physics blog ever. It’s a great subject for pop-mathematics/physics writing. The idea, that the geometry of a problem tells us something about its physics and vice-versa, is important. It’s a heady thought without being so exotic as to seem counter-intuitive. And its discoverer was Dr Amalie Emmy Noether. She’s an early-20th-century demonstration of the first-class work that one can expect women to do when they’re not driven out of mathematics. You see why the topic is so near irresistible.

So we have to respect the conservation of angular momentum. This might sound like we have to give up on treating circular orbits as one-variable problems. We don’t have to just yet. We will, eventually, want to look at not just how far the planet is from the origin but also in what direction it is. We don’t need to do that yet. We have a brilliant hack.

We can represent the conservation of angular momentum as a slight repulsive force. It’s not very big if the angular momentum is small. It’s not going to be a very big force unless the planet gets close to the origin, that is, until r gets close to zero. But it does grow large and acts as if the planet is being pushed away. We consider that a pseudoforce. It appears because our choice of coordinates would otherwise miss some important physics. And that’s fine. It’s not wrong any more than, say, a hacksaw is the wrong tool to cut through PVC pipe just because you also need a vise.

This pseudoforce can be paired with a pseudo-potential energy. One of the great things about the potential-energy view of physics is that adding two forces together is as easy as adding their potential energies together. We call the sum of the original potential energy and the angular-momentum-created pseudopotential the “effective potential energy”. Far from the origin, for large radiuses r, this will be almost identical to the original potential energy. Close to the origin, this will be a function that rises up steeply. And as a result there can suddenly be a local minimum. There can be a circular orbit.
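We can even find the circular orbit this way. Here’s a little sketch for the spring potential; the mass, spring constant, and angular momentum are numbers I made up, and a brute-force scan stands in for the careful calculus:

```python
# Effective potential energy: V_eff(r) = V(r) + L**2 / (2 * m * r**2).
# With a spring potential V(r) = 0.5 * k * r**2 the minimum of V_eff,
# the circular orbit, sits at r = (L**2 / (m * k)) ** 0.25.

m, k, L = 1.0, 2.0, 3.0

def V_eff(r):
    return 0.5 * k * r**2 + L**2 / (2 * m * r**2)

# A crude scan over radii finds the minimum well enough for a picture.
radii = [0.01 * i for i in range(1, 1000)]
r_scan = min(radii, key=V_eff)
r_exact = (L**2 / (m * k)) ** 0.25

print(r_scan, r_exact)  # the scanned minimum lands near the exact radius
```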

Figure 1. The potential energy of a spring — the red line — and the effective potential energy — the blue line — when the angular momentum is added as a pseudoforce. Without angular momentum in consideration the only equilibrium is at the origin. With angular momentum there’s some circular orbit, somewhere. Don’t pay attention to the numbers on the axes. They don’t mean anything.

Figure 2. The potential energy of a gravitational attraction — the red line — and the effective potential energy — the blue line — when the angular momentum is added as a pseudoforce. Without angular momentum in consideration there’s no equilibrium. The thing, a planet, falls into the center, the sun. With angular momentum there’s some circular orbit. As before the values of the numbers don’t matter and you should just ignore them.

The location of the minimum — the radius of the circular orbit — will depend on the original potential, of course. It’ll also depend on the angular momentum. The smaller the angular momentum the closer to the origin will be the circular orbit. If the angular momentum is zero we have the original potential and the planet dropping into the center again. If the angular momentum is large enough there might not even be a minimum anymore. That matches systems where the planet has escape velocity and can go plunging off into deep space. And we can see this by looking at the plot of the effective potential energy even before we calculate things.
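For the gravitational case, where $V(r) = -C/r$, the location of that minimum can be worked out exactly. A sketch, with constants of my own choosing:

```python
# Gravity-like potential V(r) = -C / r plus the angular-momentum term
# L**2 / (2 * m * r**2). Setting the derivative of the sum to zero
# gives a circular orbit at r = L**2 / (m * C): the smaller the
# angular momentum, the closer to the origin the circular orbit.

m, C = 1.0, 4.0

def circular_radius(L):
    return L**2 / (m * C)

for L in (1.0, 2.0, 4.0):
    print(L, circular_radius(L))  # radius grows as angular momentum grows
```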

Figure 3. Gravitational potential energy — the red line — and the effective potential energy — the blue line — when angular momentum is considered. In this case the angular momentum is so large, that is, the planet is moving so fast, that there are no orbits. The planet’s reached escape velocity and can go infinitely far away from the sun.

This only goes so far as demonstrating a circular orbit should exist. Or giving some conditions for which a circular orbit wouldn’t. We might want to know something more, like where that circular orbit is. Or if it’s possible for there to be an elliptic orbit. Or other shapes. I imagine it’s possible to work this out with careful enough drawings. But at some point it gets easier to just calculate things. We’ll get to that point soon.

Reading the Comics, August 19, 2016: Mathematics Signifier Edition

I know it seems like when I write these essays I spend the most time on the first comic in the bunch and give the last ones a sentence, maybe two at most. I admit when there’s a lot of comics I have to write up at once my energy will droop. But Comic Strip Master Command apparently wants the juiciest topics sent out earlier in the week. I have to follow their lead.

Stephen Beals’s Adult Children for the 14th uses mathematics to signify deep thinking. In this case Claremont, the dog, is thinking of the Riemann Zeta function. It’s something important in number theory, so longtime readers should know this means it leads right to an unsolved problem. In this case it’s the Riemann Hypothesis. That’s the most popular candidate for “what is the most important unsolved problem in mathematics right now?” So you know Claremont is a deep-thinking dog.

The big Σ ordinary people might recognize as representing “sum”. The notation means to evaluate, for each legitimate value of the thing underneath — here it’s ‘n’ — the value of the expression to the right of the Sigma. Here that’s $\frac{1}{n^s}$. Then add up all those terms. It’s not explicit here, but context would make clear, n is positive whole numbers: 1, 2, 3, and so on. s would be a positive number, possibly a whole number.

The big capital Pi is more mysterious. It’s Sigma’s less popular brother. It means “product”. For each legitimate value of the thing underneath it — here it’s “p” — evaluate the expression on the right. Here that’s $\frac{1}{1 - \frac{1}{p^s}}$. Then multiply all that together. In the context of the Riemann Zeta function, “p” here isn’t just any old number, or even any old whole number. It’s only the prime numbers. Hence the “p”. Good notation, right? Yeah.

This particular equation, once shored up with the context the symbols live in, was proved by Leonhard Euler, who proved so much you sometimes wonder if later mathematicians were needed at all. It ties in to how often whole numbers are going to be prime, and what the chances are that some set of numbers are going to have no factors in common. (Other than 1, which is too boring a number to call a factor.) But even if Claremont did know that Euler got there first, it’s almost impossible to do good new work without understanding the old.
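Euler’s identity from the strip can even be checked numerically. This is a sketch of my own: I take s = 2, where the exact value is known to be π²/6, and compare a partial sum against a partial product over primes.

```python
# Euler's identity behind the strip: the sum of 1/n**s over whole numbers
# equals the product of 1/(1 - 1/p**s) over primes. Check at s = 2, where
# the exact value is pi**2 / 6.
from math import pi

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(n**0.5) + 1):
        if sieve[p]:
            for q in range(p * p, n + 1, p):
                sieve[q] = False
    return [p for p, flag in enumerate(sieve) if flag]

s = 2
zeta_sum = sum(1.0 / n**s for n in range(1, 100001))

zeta_prod = 1.0
for p in primes_up_to(1000):
    zeta_prod *= 1.0 / (1.0 - 1.0 / p**s)

print(zeta_sum, zeta_prod, pi**2 / 6)  # all three agree to a few decimal places
```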

Charlos Gary’s Working It Out for the 14th is this essay’s riff on pie charts. Or bar charts. Somewhere around here the past week I read that a French idiom for the pie chart is the “cheese chart”. That’s a good enough bit I don’t want to look more closely and find out whether it’s true. If it turned out to be false I’d be heartbroken.

Ryan North’s Dinosaur Comics for the 15th talks about everyone’s favorite physics term, entropy. Everyone knows that it tends to increase. Few advanced physics concepts feel so important to everyday life. I almost made one expression of this — Boltzmann’s H-Theorem — a Theorem Thursday post. I might do a proper essay on it yet. Utahraptor describes this as one of “the few statistical laws of physics”, which I think is a bit unfair. There’s a lot about physics that is statistical; it’s often easier to deal with averages and distributions than the mass of real messy data.

Utahraptor’s right to point out that it isn’t impossible for entropy to decrease. It can be expected not to, in time. Indeed decent scientists thinking as philosophers have proposed that “increasing entropy” might be the only way to meaningfully define the flow of time. (I do not know how decent the philosophy of this is. This is far outside my expertise.) However: we would expect at least one tails to come up if we simultaneously flipped infinitely many coins fairly. But there is no reason that it couldn’t happen, that infinitely many fairly-tossed coins might all come up heads. The probability of this ever happening is zero. Yet it is not impossible. Such is the intuition-destroying nature of probability and of infinitely large things.
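To put some numbers on the coin intuition — these are my own, the arithmetic is just (1/2) raised to the number of coins:

```python
# The chance that n fair coins all come up heads is (1/2)**n. For any
# finite n it is small but not zero; only in the limit of infinitely
# many coins does the probability reach exactly zero.

def all_heads(n):
    return 0.5 ** n

for n in (10, 50, 100):
    print(n, all_heads(n))
```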

Tony Cochran’s Agnes on the 16th proposes to decode the Voynich Manuscript. Mathematics comes in as something with answers that one can check for comparison. It’s a familiar role. As I seem to write three times a month, this is fair enough to say to an extent. Coming up with an answer to a mathematical question is hard. Checking the answer is typically easier. Well, there are many things we can try in looking for an answer. To see whether a proposed answer works usually we just need to go through it and see if the logic holds. This might be tedious to do, especially in those enormous brute-force problems where the proof amounts to showing there are a hundred zillion special cases and here’s an answer for each one of them. But it’s usually a much less hard thing to do.

Johnny Hart and Brant Parker’s Wizard of Id Classics for the 17th uses what seems like should be an old joke about bad accountants and nepotism. Well, you all know how important bookkeeping is to the history of mathematics, even if I’m never that specific about it because it never gets mentioned in the histories of mathematics I read. And apparently sometime between the strip’s original appearance (the 20th of August, 1966) and my childhood the Royal Accountant character got forgotten. That seems odd given the comic potential I’d imagine him to have. Sometimes a character’s only good for a short while is all.

Mark Anderson’s Andertoons for the 18th is the Andertoons representative for this essay. Fair enough. The kid speaks of exponents as a kind of repeating oneself. This is how exponents are inevitably introduced: as multiplying a number by itself many times over. That’s a solid way to introduce raising a number to a whole number. It gets a little strained to describe raising a number to a rational number. It’s a confusing mess to describe raising a number to an irrational number. But you can make that logical enough, with effort. And that’s how we do make the idea rigorous. A number raised to (say) the square root of two is something greater than the number raised to 1.4, but less than the number raised to 1.5. More than the number raised to 1.41, less than the number raised to 1.42. More than the number raised to 1.414, less than the number raised to 1.415. This takes work, but it all hangs together. And then we ask about raising numbers to an imaginary or complex-valued number and we wave that off to a higher-level mathematics class.
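The squeezing process in that paragraph can be watched in action. A sketch of my own, pinning 2 raised to the square root of 2 between rational exponents:

```python
# Squeezing 2 raised to the square root of 2 between rational exponents,
# the way the paragraph describes: each added decimal digit of sqrt(2)
# tightens the lower and upper bounds.

target = 2 ** (2 ** 0.5)

for lo, hi in ((1.4, 1.5), (1.41, 1.42), (1.414, 1.415)):
    assert 2 ** lo < target < 2 ** hi
    print(f"2^{lo} < 2^sqrt(2) < 2^{hi}")
```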

Nate Fakes’s Break of Day for the 18th is the anthropomorphic-numerals joke for this essay.

Lachowski’s Get A Life for the 18th is the sudoku joke for this essay. It’s also a representative of the idea that any mathematical thing is some deep, complicated puzzle at least as challenging as calculating one’s taxes. I feel like this is a rerun, but I don’t see any copyright dates. Sudoku jokes like this feel old, but comic strips have been known to make dated references before.

Samson’s Dark Side Of The Horse for the 19th is this essay’s Dark Side Of The Horse gag. I thought initially this was a counting-sheep in a lab coat. I’m going to stick to that mistaken interpretation because it’s more adorable that way.

• elkement (Elke Stangl) 7:20 am on Monday, 22 August, 2016 Permalink | Reply

Interesting – just learned about the Voynich manuscript for the first time a few days ago. Those coincidences!


• Joseph Nebus 6:00 pm on Wednesday, 17 August, 2016 Permalink | Reply Tags: England ( 3 ), knot theory ( 8 ), maps ( 7 ), sheep, Twitter ( 3 ), Utopia, words ( 4 ), zero ( 5 )

Can’t deny that I will sometimes stockpile links of mathematics stuff to talk about. Sometimes I even remember to post it. Sometimes it’s a tweet like this, which apparently I’ve been carrying around since April:

I admit I do not know whether the claim is true. It’s plausible enough. English has many variants in England alone, and any trade will pick up its own specialized jargon. The words are fun as it is.

From the American Mathematical Society there’s this:

I talk a good bit about knot theory. It captures the imagination and it’s good for people who like to doodle. And it has a lot of real-world applications. Tangled wires, protein strands, high-energy plasmas, they all have knots in them. Some work by Paul Sutcliffe and Fabian Maucher, both of Durham University, studies tangled vortices. These are vortices that are, er, tangled together, just like you imagine. Knot theory tells us much about this kind of vortex. And it turns out these tangled vortices can untangle themselves and smooth out again, even without something to break them up and rebuild them. It gives hope for power cords everywhere.

Nerds have a streak which compels them to make blueprints of things. It can be part of the healthier side of nerd culture, the one that celebrates everything. The side that tries to fill in the real-world things that the thing-celebrated would have if it existed. So here’s a bit of news about doing that:

I like the attempt to map Sir Thomas More’s Utopia. It’s a fun exercise in matching stuff to a thin set of data. But as mentioned in the article, nobody should take it too seriously. The exact arrangement of things in Utopia isn’t the point of the book. More probably didn’t have a map for it himself.

(Although maybe. I believe I got this from Simon Garfield’s On The Map: A Mind-Expanding Exploration Of The Way The World Looks and apologize generally if I’ve got it wrong. My understanding is Robert Louis Stevenson drew a map of Treasure Island and used it to make sure references in the book were consistent. Then the map was lost in the mail to his publishers. He had to read his text and re-create it as best he could. Which, if true, makes the map all the better. It makes it so good a lost-map story that I start instinctively to doubt it; it’s so colorfully perfect, after all.)

And finally there’s this gem from the Magic Realism Bot:

Reading the Comics, August 12, 2016: Skipping Saturday Edition

I have no idea how many or how few comic strips on Saturday included some mathematical content. I was away most of the day. We made a quick trip to the Michigan’s Adventure amusement park and then to play pinball in a kind-of competitive league. The park turned out to have every person in the world there. If I didn’t wave to you from the queue on Shivering Timbers I apologize but it hasn’t got the greatest lines of sight. The pinball stuff took longer than I expected too and, long story short, we got back home about 4:15 am. So I’m behind on my comics and here’s what I did get to.

Tak Bui’s PC and Pixel for the 8th depicts the classic horror of the cleaning people wiping away an enormous amount of hard work. It’s a primal fear among mathematicians at least. Boards with a space blocked off with the “DO NOT ERASE” warning are common. At this point, though, the work is probably still salvageable. You can almost always reconstruct work, and a few smeared lines like this are not bad at all.

The work appears to be quantum mechanics work. The tell is in the upper right corner. There’s a line defining E (energy) as equal to something including $\imath \hbar \frac{\partial}{\partial t}\phi(r, t)$. This appears in the time-dependent Schrödinger Equation. It describes how probability waveforms look when the potential energies involved may change in time. These equations are interesting and, outside a few special cases, impossible to solve exactly. We have to resort to approximations, including numerical approximations, all the time. So that’s why the computer lab would be working on this.
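Those numerical approximations can be sketched in a few lines. Here is a toy Crank-Nicolson integrator for the time-dependent Schrödinger equation on a one-dimensional grid. The units, grid size, and harmonic potential are all my own arbitrary choices for illustration, not anything from the strip:

```python
import numpy as np

# Toy Crank-Nicolson step for the time-dependent Schrodinger equation,
# in units with hbar = 1 and mass = 1/2, so H = -d^2/dx^2 + V(x).
N, L = 200, 20.0
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
dt = 0.005

# Harmonic-well potential and an off-center Gaussian wave packet.
V = x**2
psi = np.exp(-(x - 1.0) ** 2)
psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)

# Finite-difference Hamiltonian (Dirichlet boundaries).
lap = (np.diag(np.full(N, -2.0)) + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1)) / dx**2
H = -lap + np.diag(V)

# Each step solves (I + i H dt/2) psi_new = (I - i H dt/2) psi_old.
I = np.eye(N)
A = I + 0.5j * dt * H
B = I - 0.5j * dt * H
for _ in range(100):
    psi = np.linalg.solve(A, B @ psi)

norm = np.sum(np.abs(psi) ** 2) * dx
print(norm)  # stays near 1: Crank-Nicolson preserves total probability
```

The Crank-Nicolson scheme is popular for exactly the reason the comment notes: it keeps the total probability at one even though nothing about the wave function is solved exactly.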

Mark Anderson’s Andertoons! Where would I be without them? Besides short on content. The strip for the 10th depicts a pollster saying to “put the margin of error at 50%”, guaranteeing the results are right. If you follow election polls you do see the results come with a margin of error, usually of about three percent. But every sampling technique carries with it a margin of error. The point of a sample is to learn something about the whole without testing everything in it, after all. And probability describes how likely it is the quantity measured by a sample will be far from the quantity the whole would have. The logic behind this is independent of the thing being sampled. It doesn’t depend on what the whole is like. It depends on how the sampling is done. It doesn’t matter whether you’re sampling voter preferences or whether there are the right number of peanuts in a bag of squirrel food.

So a sample’s measurement will almost never be exactly the same as the whole population’s. That’s just requesting too much of luck. The margin of error represents how far it is likely we’re off. If we’ve sampled the voting population fairly — the hardest part — then it’s quite reasonable the actual vote tally would be, say, one percent different from our poll. It’s implausible that the actual votes would be ninety percent different. The margin of error is roughly the biggest plausible difference we would expect to see.

Except. Sometimes we do, even with the best sampling methods possible, get a freak case. Rarely noticed beside the margin of error is the confidence level. This is what the probability is that the actual population value is within the sampling error of the sample’s value. We don’t pay much attention to this because we don’t do statistical-sampling on a daily basis. The most that normal people do is read election polling results. And most election polls settle for a confidence level of about 95 percent. That is, 95 percent of the time the actual voting preference will be within three or so percentage points of the survey’s result. The 95 percent confidence level is popular maybe because it feels like a nice round number. It’ll be off only about one time out of twenty. It also makes a nice balance between a margin of error that doesn’t seem too large and that doesn’t need too many people to be surveyed. As often with statistics the common standard is an imperfectly-logical blend of good work and ease of use.
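The usual formula behind that “about three percent” is easy to sketch. For a proportion p estimated from a simple random sample of n people, the margin of error is the z-score for the confidence level (about 1.96 for 95 percent) times the standard error. The thousand-person poll below is my own illustrative number:

```python
import math

# Margin of error for a sampled proportion p with sample size n,
# at the confidence level whose z-score is z (1.96 for 95 percent).
def margin_of_error(n, p=0.5, z=1.96):
    return z * math.sqrt(p * (1 - p) / n)

# A roughly thousand-person poll gives the familiar three percent.
print(round(margin_of_error(1000), 3))
```

Notice that nothing in the formula cares what is being sampled, voters or peanuts; only the sample size and the sampling method matter.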

For the 11th Mark Anderson gives me less to talk about, but a cute bit of wordplay. I’ll take it.

Anthony Blades’s Bewley for the 12th is a rerun. It’s at least the third time this strip has turned up since I started writing these Reading The Comics posts. For the record it ran also the 27th of April, 2015 and on the 24th of May, 2013. It also suggests mathematicians have a particular tell. Try this out next time you do word problem poker and let me know how it works for you.

Julie Larson’s The Dinette Set for the 12th I would have sworn I’d seen here before. I don’t find it in my archives, though. We are meant to just giggle at Larson’s characters who bring their penny-wise pound-foolishness to everything. But there is a decent practical mathematics problem here. (This is why I thought it had run here before.) How far is it worth going out of one’s way for cheaper gas? How much cheaper? It’s simple algebra, and I’d bet it’s behind many simple JavaScript calculator tools. The comic strip originally ran the 4th of October, 2005. Possibly it’s been rerun since.
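The algebra amounts to comparing the savings at the pump against the fuel burned getting there. A minimal sketch, with every number below made up for illustration:

```python
# Is the detour for cheaper gas worth it?  The savings at the pump
# have to beat the cost of the fuel burned driving the extra miles.
def worth_the_detour(savings_per_gallon, gallons_bought,
                     detour_miles, miles_per_gallon, price_per_gallon):
    savings = savings_per_gallon * gallons_bought
    detour_cost = (detour_miles / miles_per_gallon) * price_per_gallon
    return savings > detour_cost

# Saving 10 cents a gallon on a 12-gallon fill-up, 25 miles per gallon:
print(worth_the_detour(0.10, 12, 4, 25, 3.00))   # a 4-mile detour pays
print(worth_the_detour(0.10, 12, 20, 25, 3.00))  # a 20-mile one doesn't
```

The cost of one’s time is left out, which is of course exactly the variable Larson’s characters never account for.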

Bill Amend’s FoxTrot Classics for the 12th is a bunch of gags about a mathematics fighting game. I think Amend might be on to something here. I assume mathematics-education contest games have evolved from what I went to elementary school on. That was a Commodore PET with a game where every time you got a multiplication problem right your rocket got closer to the ASCII Moon. But the game would probably quickly turn into people figuring how to multiply the other person’s function by zero. I know a game exploit when I see it.

The most obscure reference is the one in the third panel. Jason speaks of “a z = 0 transform”. This would seem to be some kind of z-transform, a thing from digital signals processing. You can represent the amplification, or noise-removal, or averaging, or other processing of a string of digits as a polynomial. Of course you can. Everything is polynomials. (OK, sometimes you must use something that looks like a polynomial but includes stuff like the variable z raised to a negative power. Don’t let that throw you. You treat it like a polynomial still.) So I get what Jason is going for here; he’s processing Peter’s function down to zero.
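That polynomial view is worth a small sketch. In the z-transform picture a signal’s samples are the coefficients of a polynomial in powers of z-inverse, and running the signal through a filter is just multiplying its polynomial by the filter’s. Polynomial multiplication of coefficient lists is the same thing as convolution. The three-point moving average below is my own example, not anything from the strip:

```python
# Multiply two polynomials given as coefficient lists.  In z-transform
# terms, filtering a signal is multiplying its polynomial (in z^-1)
# by the filter's polynomial; this is convolution of the coefficients.
def poly_multiply(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

signal = [1.0, 3.0, 5.0, 3.0, 1.0]
moving_average = [1 / 3, 1 / 3, 1 / 3]  # H(z) = (1 + z^-1 + z^-2)/3

print(poly_multiply(signal, moving_average))
```

The middle coefficients of the result are the averaged, smoothed-out signal, which is the filter doing its job.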

That said, let me warn you that I don’t do digital signal processing. I just taught a course in it. (It’s a great way to learn a subject.) But I don’t think a “z = 0 transform” is anything. Maybe Amend encountered it as an instructor’s or friend’s idiosyncratic usage. (Amend was a physics student in college, and shows his comfort with mathematics-major talk often. He by the way isn’t even the only syndicated cartoonist with a physics degree. Bud Grace of The Piranha Club was also a physics major.) I suppose he figured “z = 0 transform” would read clearly to the non-mathematician and be interpretable to the mathematician. He’s right about that.

Finally, What I Learned Doing Theorem Thursdays

The biggest thing I learned from my Theorem Thursdays project was: don’t do this for Thursdays. The appeal is obvious. If things were a little different I’d have no problem with Thursdays. But besides being a slightly-read pop-mathematics blogger I’m also a slightly-read humor blogger. And I try to have a major piece, about seven hundred words that are more than simply commentary on how a comic strip’s gone wrong, ready for Thursday evenings my time.

That’s all my doing. It’s a relic of my thinking that the humor blog should run at least a bit like a professional syndicated columnist’s, with a fixed deadline for bigger pieces. While I should be writing more ahead of deadline than this, what I would do is get to Wednesday realizing I have two major things to write in a day. I’d have an idea for one of them, the mathematics thing, since I would pick a topic the previous Thursday. And once I’ve picked an idea the rest is easy. (Part of the process of picking is realizing whether there’s any way to make seven hundred words about something.) But that’s a lot of work for something that’s supposed to be recreational. Plus Wednesdays are, two weeks a month, a pinball league night.

So Thursday is right out, unless I get better about having first drafts of stuff done Monday night. This has problems for future appearances of the gimmick. The alliterative pull is strong. The only remotely compelling alternative is Theorems on the Threes, maybe on the 3rd, 13th, and 23rd of the month. That leaves the 30th and 31st unaccounted for, and room for a good squabble about whether they count in an “on the threes” scheme.

There’s a lot of good stuff to say about the project otherwise. The biggest is that I had fun with it. The Theorem Thursday pieces sprawled into for-me extreme lengths, two to three thousand words. I had space to be chatty and silly and autobiographical in ways that even the A To Z projects don’t allow. Somehow those essays didn’t get nearly as long, possibly because I was writing three of them a week. I didn’t actually write fewer things in July than I did in, say, May. But it was fewer kinds of things; postings were mostly Theorem Thursdays and Reading the Comics posts. Still, overall readership didn’t drop and people seemed to quite like what I did write. It may be fewer but longer-form essays are the way I should go.

Also I found that people like stranger stuff. There’s an understandable temptation in doing pop-mathematics to look for topics that are automatically more accessible. People are afraid enough of mathematics. They have good reason to be terrified of some topic even mathematics majors don’t encounter until their fourth year. So there’s a drive to simpler topics, or topics that have fewer prerequisites, and that’s why every mathematics blogger has an essay about how the square root of two is irrational and how there’s different sizes to infinitely large sets. And that’s produced some excellent writing about topics like those, which are great topics. They have got the power to inspire awe without requiring any warming up. That’s special.

But it also means they’re hard to write anything new or compelling about if you’re like me, and in somewhere like the second hundred billion of mathematics bloggers. I can’t write anything better than what’s already gone about that. Liouville’s Theorem? That’s something I can be a good writer about. With that, I can have a blog personality. It’s like having a real personality but less work.

As I did with the Leap Day 2016 A To Z project, I threw the topics open to requests. I didn’t get many. Possibly the form gave too much freedom. Picking something to match a letter, as in the A to Z, gives a useful structure for choosing something specific. Pick a theorem from anywhere in mathematics? Something from algebra class? Something mentioned in a news report about a major breakthrough the reporter doesn’t understand but had an interesting picture? Something that you overheard the name of once without any context? How should people know what the scope of it is, before they’ve even seen a sample? And possibly people don’t actually remember the names of theorems unless they stay in mathematics or mathematics-related fields. Those folks hardly need theorems whose names they remember explained to them. This is a hard problem to imagine people having, but it’s something I must consider.

So this is what I take away from the two-month project. There’s a lot of fun digging into the higher-level mathematics stuff. There’s an interest in it, even if it means I write longer and therefore fewer pieces. Take requests, but have a structure for taking them that makes it easy to tell what requests should look like. Definitely don’t commit to doing big things for Thursday, not without a better scheme for getting the humor blog pieces done. Free up some time Wednesday and don’t again put up an awful score on Demolition Man like I did last time. Seriously, I had a better score on The Simpsons Pinball Party than I did on Demolition Man and while you personally might not find this amusing there’s at least two people really into pinball who know how hilarious that is. (The games have wildly different point scorings. This is like having a basketball score be lower than a hockey score.) That isn’t so important to mathematics blogging but it’s a good lesson to remember anyway.

• elkement (Elke Stangl) 6:21 am on Monday, 22 August, 2016 Permalink | Reply

You are such a prolific writer – kudos! Sorry that I am hardly able to catch up in some months ;-)


• Joseph Nebus 8:48 pm on Sunday, 28 August, 2016 Permalink | Reply

Aw, well, thank you, trusting that prolific is a good thing. I doubt I have time to read myself, as my problem with comments should prove.

It happens I’ve gotten into a slow stretch the past few weeks. I’m hoping that with the start of a new season I’ll be able to get to a better balance between twice-a-week and daily.


Reading the Comics, August 5, 2016: Word Problems Edition

And now to close out the rest of last week’s comics, those from between the 1st and the 6th of the month. It’s a smaller set. Take it up with the traffic division of Comic Strip Master Command.

Mason Mastroianni, Mick Mastroianni, and Perri Hart’s B.C. for the 2nd is mostly a word problem joke. It’s boosted some by melting into it a teacher complaining about her pay. It does make me think some about what the point of a story problem is. That is, why is the story interesting? Often it isn’t. The story is just an attempt to make a computation problem look like the sort of thing someone might wonder in the real world. This is probably why so many word problems are awful as stories and as incentive to do a calculation. There’s a natural interest that one might have in, say, the total distance travelled by a rubber ball dropped and bouncing until it finally comes to a rest. But that’s only really good for testing how one understands a geometric series. It takes more storytelling to work out why you might want to find a cube root of $x^2$ minus eight.
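The bouncing-ball problem is a nice one to see worked. If the ball falls from height h and each bounce returns a fraction r of the previous height, the bounces form a geometric series and the total distance is h + 2hr/(1 - r). A quick sketch, with a made-up drop height and bounciness:

```python
# Total distance of a dropped, bouncing ball, two ways: summing the
# bounces directly, and with the geometric-series formula.
def total_distance_simulated(h, r, bounces=60):
    total = h
    for _ in range(bounces):
        h *= r
        total += 2 * h  # the ball goes up and comes back down
    return total

def total_distance_formula(h, r):
    return h + 2 * h * r / (1 - r)

print(total_distance_simulated(1.0, 0.5))  # approaches 3.0
print(total_distance_formula(1.0, 0.5))    # exactly 3.0
```

Which is the point of the word problem: the infinite sum collapses to one tidy expression, whether or not the story around it is any good.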

Dave Whamond’s Reality Check for the 3rd uses mathematics on the blackboard as symbolic for all the problems one might have. Also a solution, if you call it that. It wouldn’t read so clearly if Ms Haversham had an English problem on the board.

Mark Anderson’s Andertoons for the 5th keeps getting funnier to me. At first reading I didn’t connect the failed mathematics problem of 2 x 0 with the caption. Once I did, I realized how snugly the comic fits together.

Greg Curfman’s Meg Classics for the 5th ran originally the 23rd of May, 1998. The application of mathematics to everyday sports was a much less developed thing back then. It’s often worthwhile to methodically study what you do, though, to see what affects the results. Here Mike has found the team apparently makes twelve missed shots for each goal. This might not seem like much of a formula, but these are kids. We shouldn’t expect formulas with a lot of variables under consideration. Since Meg suggests Mike needed to account for “the whiff factor” I have to suppose she doesn’t understand the meaning of the formula. Or perhaps she wonders why missed kicks before getting to the goal don’t matter. Well, every successful model starts out as a very simple thing to which we add complexity, and realism, as we’re able to handle them. If lucky we end up with a good balance between a model that describes what we want to know and yet is simple enough to understand.

Reading the Comics, August 1, 2016: Kalends Edition

The last day of July and first day of August saw enough mathematically-themed comic strips to fill a standard-issue entry. The rest of the week wasn’t so well-stocked. But I’ll cover those comics on Tuesday if all goes well. This may be a silly plan, but it is a plan, and I will stick to that.

Johnny Hart’s Back To BC reprints the venerable and groundbreaking comic strip from its origins. On the 31st of July it reprinted a strip from February 1959 in which Peter discovers mathematics. The work’s elaborate, much more than we would use to solve the problem today. But it’s always like that. Newly-discovered mathematics is much like any new invention or innovation, a rickety set of things that just barely work. With time we learn better how the idea should be developed. And we become comfortable with the cultural assumptions going into the work. So we get more streamlined, faster, easier-to-use mathematics in time.

The early invention of mathematics reappears the 1st of August, in a strip from earlier in February 1959. In this case it’s the sort of word problem confusion strip that any comic with a student could do. That’s a bit disappointing but Hart had much less space than he’d have for the Sunday strip above. One must do what one can.

Mac King and Bill King’s Magic in a Minute for the 31st maybe isn’t really mathematics. I guess there’s something in the modular-arithmetic implied by it. But it depends on a neat coincidence. Follow the directions in the comic about picking a number from one to twelve and counting out the letters in the word for that number. And then the letters in the word for the number you’re pointing to, and then once again. It turns out this leads to the same number. I’d never seen this before and it’s neat that it does.
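The coincidence is easy to check by brute force. Counting the letters in a number’s name and moving to that number always funnels you toward four, the only number from one to twelve whose name has as many letters as its value. A quick sketch of the whole trick:

```python
# The Magic in a Minute trick: pick a number from one to twelve,
# count the letters in its name, move to that number, and repeat.
WORDS = ["one", "two", "three", "four", "five", "six",
         "seven", "eight", "nine", "ten", "eleven", "twelve"]

def letter_count_step(n):
    return len(WORDS[n - 1])

# Every starting number is drawn to four within a few steps,
# since "four" is the only fixed point of the letter count.
for start in range(1, 13):
    n = start
    for _ in range(5):
        n = letter_count_step(n)
    print(start, "->", n)
```

Every line of output ends at four, which is why the magician can know where the spectator’s finger lands without ever seeing it.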

Rick Detorie’s One Big Happy rerun for the 31st features Ruthie teaching, as she will. She mentions offhand the “friendlier numbers”. By this she undoubtedly means the numbers that are attractive in some way, like being nice to draw. There are “friendly numbers”, though, as number theorists see things. These are sets of numbers. For each number in this set you get the same index if you add together all its divisors (including 1 and the original number) and divide it by the original number. For example, the divisors of six are 1, 2, 3, and 6. Add that together and you get 12; divide that by the original 6 and you get 2. The divisors of 28 are 1, 2, 4, 7, 14, and 28. Add that pile of numbers together and you get 56; divide that by the original 28 and you get 2. So 6 and 28 are friendly numbers, each the friend of the other.

As often happens with number theory there’s a lot of obvious things we don’t know. For example, we know that 1, 2, 3, 4, and 5 have no friends. But we do not know whether 10 has any. Nor 14 nor 20. I do not know if it is proved whether there are infinitely many sets of friendly numbers. Nor do I know if it is proved whether there are infinitely many numbers without friends. Those last two sentences are about my ignorance, though, and don’t reflect what number theory people know. I’m open to hearing from people who know better.
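The index described above, the sum of all of a number’s divisors divided by the number itself, is easy to compute, and exact fractions keep the comparison honest:

```python
from fractions import Fraction

# The abundancy index: sum every divisor of n (including 1 and n
# itself), then divide by n.  Numbers sharing an index are "friendly".
def abundancy(n):
    divisor_sum = sum(d for d in range(1, n + 1) if n % d == 0)
    return Fraction(divisor_sum, n)

# 6 and 28 both come out to 2, so they are friends of each other.
print(abundancy(6), abundancy(28))
```

This brute-force divisor search is fine for small numbers; it is a sketch of the definition, not a tool for serious number theory.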

There are also things called “amicable numbers”, which are easier to explain and to understand than “friendly numbers”. A pair of numbers are amicable if the sum of one number’s proper divisors is the other number. 220 and 284 are the smallest pair of amicable numbers. Fermat found that 17,296 and 18,416 were an amicable pair; Descartes found that 9,363,584 and 9,437,056 were. Both pairs were known to Arab mathematicians already. Amicable pairs are easy enough to produce. From the ninth century we’ve had Thābit ibn Qurra’s rule, which lets you generate sets of numbers. Ruthie wasn’t thinking of any of this, though, and was more thinking how much fun it is to write a 7.
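The amicable condition is a one-line check. Proper divisors here means every divisor except the number itself:

```python
# Amicable-pair check: the proper divisors of each number must
# sum to the other number.
def proper_divisor_sum(n):
    return sum(d for d in range(1, n) if n % d == 0)

def amicable(a, b):
    return proper_divisor_sum(a) == b and proper_divisor_sum(b) == a

print(amicable(220, 284))      # the smallest amicable pair
print(amicable(17296, 18416))  # Fermat's pair
```

The divisors of 220 other than itself sum to 284, and those of 284 other than itself sum back to 220, which is the whole of the friendship.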

Terry Border’s Bent Objects for the 1st just missed the anniversary of John Venn’s birthday and all the joke Venn Diagrams that were going around, at least if your social media universe looks anything like mine.

Jon Rosenberg’s Scenes from a Multiverse for the 1st is set in “Mathpinion City”, in the “Numerically Flexible Zones”. And I appreciate it’s a joke about the politicization of science. But science and mathematics are human activities. They are culturally dependent. And especially at the dawn of a new field of study there will be long and bitter disputes about what basic terms should mean. It’s absurd for us to think that the question of whether 1 + 1 should equal 2 or 3 could even arise.

But we think that because we have absorbed ideas about what we mean by ‘1’, ‘2’, ‘3’, ‘plus’, and ‘equals’ that settle the question. There was, if I understand my mathematics history right — and I’m not happy with my reading on this — a period in which it was debated whether negative numbers should be considered as less than or greater than the positive numbers. Absurd? Thermodynamics allows for the existence of negative temperatures, and those represent extremely high-energy states, things that are hotter than positive temperatures. A thing may get hotter, from 1 Kelvin to 4 Kelvin to a million Kelvin to infinitely many Kelvin to -1000 Kelvin to -6 Kelvin. If there are intuition-defying things to consider about “negative six” then we should at least be open to the proposition that the universal truths of mathematics are understood by subjective processes.
