## Reading the Comics, October 4, 2016: Split Week Edition Part 1

The last week in mathematically themed comics was a pleasant one. By “a pleasant one” I mean Comic Strip Master Command sent enough comics out that I feel comfortable splitting them across two essays. Look for the other half of the past week’s strips in a couple days at a very similar URL.

Mac King and Bill King’s Magic in a Minute feature for the 2nd shows off a bit of number-pattern wonder. Set numbers in order on a four-by-four grid and select four as directed and add them up. You get the same number every time. It’s a cute trick. I would not be surprised if there’s some good group theory questions underlying this, like about what different ways one could arrange the numbers 1 through 16. Or for what other sizes of grid the pattern will work: 2 by 2? (Obviously.) 3 by 3? 5 by 5? 6 by 6? I’m not saying I actually have been having fun doing this. I just sense there’s fun to be had there.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 2nd is based on one of those weirdnesses of the way computers add. I remember in the 90s being on a Java mailing list. Routinely it would draw questions from people worried that something was very wrong, as adding 0.01 to a running total repeatedly wouldn’t get to exactly 1.00. Java was working correctly, in that it was doing what the specifications said. It’s just the specifications didn’t work quite like new programmers expected.

What’s going on here is the same problem you get if you write down 1/3 as 0.333. You know that 1/3 plus 1/3 plus 1/3 ought to be 1 exactly. But 0.333 plus 0.333 plus 0.333 is 0.999. 1/3 is really a little bit more than 0.333, but we skip that part because it’s convenient to use only a few points past the decimal. Computers normally represent real-valued numbers with a scheme called floating point representation. At heart, that’s representing numbers with a couple of digits. Enough that we don’t normally see the difference between the number we want and the number the computer represents.
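You can watch this happen in any language that uses standard double-precision floating point. Here’s a minimal sketch in Python, standing in for the Java of those mailing-list questions:

```python
# Add 0.01 to a running total one hundred times, as those worried
# new programmers did. Binary floating point can't represent 0.01
# exactly, so the tiny representation errors accumulate.
total = 0.0
for _ in range(100):
    total += 0.01

print(total)         # very close to 1.0, but not exactly 1.0
print(total == 1.0)  # False
```

The total lands within a few quadrillionths of 1.0, which is why nothing is actually wrong; it’s just not what a new programmer expects the `==` test to say.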

Every number base has some rational numbers it can’t represent exactly using finitely many digits. Our normal base ten, for example, has “one-third” and “two-thirds”. Floating point arithmetic is built on base two, and that has some problems with tenths and hundredths and thousandths. That’s embarrassing but in the main harmless. Programmers learn about these problems and how to handle them. And if they ask the mathematicians we tell them how to write code so as to keep these floating-point errors from growing uncontrollably. If they ask nice.

Random Acts of Nancy for the 3rd is a panel from Ernie Bushmiller’s Nancy. That panel’s from the 23rd of November, 1946. And it just uses mathematics in passing, arithmetic serving the role of most of Nancy’s homework. There’s a bit of spelling (I suppose) in there too, which probably just represents what’s going to read most cleanly. Random Acts is curated by Ernie Bushmiller fans Guy Gilchrist (who draws the current Nancy) and John Lotshaw.

Thom Bluemel’s Birdbrains for the 4th depicts the discovery of a new highest number. When humans discovered ‘1’ is, I would imagine, unknowable. Given the number sense that animals have it’s probably something that predates humans, that it’s something we’re evolved to recognize and understand. A single stroke for 1 seems to be a common symbol for the number. I’ve read histories claiming that a culture’s symbol for ‘1’ is often what they use for any kind of tally mark. Obviously nothing in human cultures is truly universal. But when I look at number symbols other than the Arabic and Roman schemes I’m used to, it is usually the symbol for ‘1’ that feels familiar. Then I get to the Thai numeral and shrug at my helplessness.

Bill Amend’s FoxTrot Classics for the 4th is a rerun of the strip from the 11th of October, 2005. And it’s made for mathematics people to clip out and post on the walls. Jason and Marcus are in their traditional nerdly way calling out sequences of numbers. Jason’s is the Fibonacci Sequence, which is as famous as mathematics sequences get. That’s the sequence of numbers in which every number is the sum of the previous two terms. You can start that sequence with 0 and 1, or with 1 and 1, or with 1 and 2. It doesn’t matter.

Marcus calls out the Perrin Sequence, which I never heard of before either. It’s like the Fibonacci Sequence. Each term in it is the sum of two other terms. Specifically, each term is the sum of the second-previous and the third-previous terms. And it starts with the numbers 3, 0, and 2. The sequence is named for François Perrin, who described it in 1899, and that’s as much as I know about him. The sequence describes some interesting stuff. Take n points and put them in a ‘cycle graph’, which looks to the untrained eye like a polygon with n corners and n sides. Now pick subsets of those points in which no two chosen points are adjacent; these are called independent sets. A maximal independent set is an independent set that doesn’t fit inside any bigger independent set. And the number of these maximal independent sets in a cycle graph is the n-th number in the Perrin sequence. I admit this seems like a nice but not compelling thing to know. But I’m not a cycle graph kind of person so what do I know?
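If you’d like to play along with Jason and Marcus, here’s a quick sketch of both sequences in Python; the function names are my own, not anything standard:

```python
def fibonacci(n):
    """First n Fibonacci numbers, starting from 1 and 1."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

def perrin(n):
    """First n Perrin numbers: starts 3, 0, 2; each later term is
    the sum of the second-previous and third-previous terms."""
    seq = [3, 0, 2]
    while len(seq) < n:
        seq.append(seq[-2] + seq[-3])
    return seq[:n]

print(fibonacci(8))  # [1, 1, 2, 3, 5, 8, 13, 21]
print(perrin(8))     # [3, 0, 2, 3, 2, 5, 5, 7]
```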

Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney for the 4th is the anthropomorphic numerals joke for this essay and I was starting to worry we wouldn’t get one.

## Theorem Thursday: A First Fixed Point Theorem

I’m going to let the Mean Value Theorem slide a while. I feel more like a Fixed Point Theorem today. As with the Mean Value Theorem there’s several of these. Here I’ll start with an easy one.

# The Fixed Point Theorem.

Back when the world and I were young I would play with electronic calculators. They encouraged play. They made it so easy to enter a number and hit an operation, and then hit that operation again, and again and again. Patterns appeared. Start with, say, ‘2’ and hit the ‘squared’ button, the smaller ‘2’ raised up from the key’s baseline. You got 4. And again: 16. And again: 256. And again and again and you got ever-huger numbers. This happened whenever you started from a number bigger than 1. Start from something smaller than 1, however tiny, and it dwindled down to zero, whatever you tried. Start at ‘1’ and it just stays there. The results were similar if you started with negative numbers. The first squaring put you in positive numbers and everything carried on as before.

This sort of thing happened a lot. Keep hitting the mysterious ‘exp’ and the numbers would keep growing forever. Keep hitting ‘sqrt’; if you started above 1, the numbers dwindled to 1. Start below and the numbers rise to 1. Or you started at zero, but who’s boring enough to do that? ‘log’ would start with positive numbers and keep dropping until it turned into a negative number. The next step was the calculator’s protest we were unleashing madness on the world.

But you didn’t always get zero, one, infinity, or madness, from repeatedly hitting the calculator button. Sometimes, some functions, you’d get an interesting number. If you picked any old number and hit cosine over and over the digits would eventually settle down to around 0.739085. Cosine’s great. Tangent … tangent is weird. Tangent does all sorts of bizarre stuff. But at least cosine is there, giving us this interesting number.

(Something you might wonder: this is the cosine of an angle measured in radians, which is how mathematicians naturally think of angles. Normal people measure angles in degrees, and that will have a different fixed point. We write both the cosine-in-radians and the cosine-in-degrees using the shorthand ‘cos’. We get away with this because people who are confused by this are too embarrassed to call us out on it. If we’re thoughtful we write, say, ‘cos x’ for radians and ‘cos x°’ for degrees. This makes the difference obvious. It doesn’t really, but at least we gave some hint to the reader.)
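If you don’t have a calculator handy, a few lines of Python do the same button-mashing, in radians:

```python
import math

# Start from any old number and hit the cosine button a hundred times.
x = 2.0
for _ in range(100):
    x = math.cos(x)

print(round(x, 6))  # 0.739085
```

Any starting value settles down to the same digits; the speed it settles is the only thing the starting point changes.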

This all is an example of a fixed point theorem. Fixed point theorems turn up in a lot of fields. They were most impressed upon me in dynamical systems, studying how a complex system changes in time. A fixed point, for these problems, is an equilibrium. It’s where things aren’t changed by a process. You can see where that’s interesting.

In this series I haven’t stated theorems exactly much, and I haven’t given them real proofs. But this is an easy one to state and to prove. Start off with a function, which I’ll name ‘f’, because yes that is exactly how much effort goes into naming functions. It has as a domain the interval [a, b] for some real numbers ‘a’ and ‘b’. And it has as range the same interval, [a, b]. It might use the whole range; it might use only a subset of it. And we have to require that f is continuous.

Then there has to be at least one fixed point. There must be at least one number ‘c’, somewhere in the interval [a, b], for which f(c) equals c. There may be more than one; we don’t say anything about how many there are. And it can happen that c is equal to a. Or that c equals b. We don’t know that it is or that it isn’t. We just know there’s at least one ‘c’ that makes f(c) equal c.

You get that in my various examples. If the function f has the rule that any given x is matched to $x^2$, then we do get two fixed points: $f(0) = 0^2 = 0$ and $f(1) = 1^2 = 1$. Or if f has the rule that any given x is matched to the square root of x, then again we have: $f(0) = \sqrt{0} = 0$ and $f(1) = \sqrt{1} = 1$. Same old boring fixed points. The cosine is a little more interesting. For that we have $f(0.739085...) = \cos\left(0.739085...\right) = 0.739085...$.

How to prove it? The easiest way I know is to summon the Intermediate Value Theorem. Since I wrote a couple hundred words about that a few weeks ago I can assume you understand it perfectly and have no questions about how it makes this problem easy. I don’t even need to go on, do I?

… Yeah, fair enough. Well, here’s how to do it. We’ll take the original function f and create, based on it, a new function. We’ll dig deep in the alphabet and name that ‘g’. It has the same domain as f, [a, b]. Its range is … oh, well, something in the real numbers. Don’t care. The wonder comes from the rule we use.

The rule for ‘g’ is this: match the given number ‘x’ with the number ‘f(x) – x’. That is, g(a) equals whatever f(a) would be, minus a. g(b) equals whatever f(b) would be, minus b. We’re allowed to define a function in terms of some other function, as long as the symbols are meaningful. And here we aren’t doing anything wrong like dividing by zero or taking the logarithm of a negative number or asking for f where it isn’t defined.

You might protest that we don’t know what the rule for f is. We’re told there is one, and that it’s a continuous function, but nothing more. So how can I say I’ve defined g in terms of a function I don’t know?

In the first place, I already know everything about f that I need to. I know it’s a continuous function defined on the interval [a, b]. I won’t use any more than that about it. And that’s great. A theorem that doesn’t require knowing much about a function is one that applies to more functions. It’s like the difference between being able to say something true of all living things in North America, and being able to say something true of all persons born in Red Bank, New Jersey, on the 18th of February, 1944, who are presently between 68 and 70 inches tall and working on their rock operas. Both things may be true, but one of those things you probably use more.

In the second place, suppose I gave you a specific rule for f. Let me say, oh, f matches x with the arccosecant of x. Are you feeling any more enlightened now? Didn’t think so.

Back to g. Here’s some things we can say for sure about it. g is a function defined on the interval [a, b]. That’s how we set it up. Next point: g is a continuous function on the interval [a, b]. Remember, g is just the function f, which was continuous, minus x, which is also continuous. The difference of two continuous functions is still going to be continuous. (This is obvious, although it may take some considered thinking to realize why it is obvious.)

Now some interesting stuff. What is g(a)? Well, it’s whatever number f(a) is minus a. I can’t tell you what number that is. But I can tell you this: it’s not negative. Remember that f(a) has to be some number in the interval [a, b]. That is, it’s got to be no smaller than a. So the smallest f(a) can be is equal to a, in which case f(a) minus a is zero. And f(a) might be larger than a, in which case f(a) minus a is positive. So g(a) is either zero or a positive number.

(If you’ve just realized where I’m going and gasped in delight, well done. If you haven’t, don’t worry. You will. You’re just out of practice.)

What about g(b)? Since I don’t know what f(b) is, I can’t tell you what specific number it is. But I can tell you it’s not a positive number. The reasoning is just like above: f(b) is some number on the interval [a, b]. So the biggest number f(b) can equal is b. And in that case f(b) minus b is zero. If f(b) is any smaller than b, then f(b) minus b is negative. So g(b) is either zero or a negative number.

(Smiling at this? Good job. If you aren’t, again, not to worry. This sort of argument is not the kind of thing you do in Boring Algebra. It takes time and practice to think this way.)

And now the Intermediate Value Theorem works. Suppose g(a) is a positive number and g(b) is a negative number. g is continuous from a to b. Therefore, there must be some number ‘c’, between a and b, for which g(c) equals zero. And remember what g(c) means: f(c) – c equals 0. Therefore f(c) has to equal c. There has to be a fixed point.

And some tidying up. Like I said, g(a) might be positive. It might also be zero. But if g(a) is zero, then f(a) – a = 0. So a would be a fixed point. And similarly if g(b) is zero, then f(b) – b = 0. So then b would be a fixed point. The important thing is there must be at least some fixed point.

Now that calculator play starts taking on purposeful shape. Squaring a number could find a fixed point only if you started with a number from -1 to 1. The square of a number outside this range, such as ‘2’, would be bigger than you started with, and the Fixed Point Theorem doesn’t apply. Similarly with exponentials. But square roots? The square root of any number from 0 to a positive number ‘b’ is a number between 0 and ‘b’, at least as long as b was bigger than 1. So there was a fixed point, at 1. The cosine of a real number is some number between -1 and 1, and the cosines of all the numbers between -1 and 1 are themselves between -1 and 1. The Fixed Point Theorem applies. Tangent isn’t continuous on the sort of interval we’d need; it blows up at its asymptotes. And the calculator play never settles on anything.

As with the Intermediate Value Theorem, this is an existence proof. It guarantees there is a fixed point. It doesn’t tell us how to find one. Calculator play does, though. Start from any old number that looks promising and work out f for that number. Then take that and put it back into f. And again. And again. This is known as “fixed point iteration”. It won’t give you the exact answer.

Not usually, anyway. In some freak cases it will. But what it will give, provided some extra conditions are satisfied, is a sequence of values that get closer and closer to the fixed point. When you’re close enough, then you stop calculating. How do you know you’re close enough? If you know something about the original f you can work out some logically rigorous estimates. Or you just keep calculating until all the decimal points you want stop changing between iterations. That’s not logically sound, but it’s easy to program.
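Here’s a sketch of that easy-to-program stopping rule in Python. The name `fixed_point_iterate` is my own invention, not any standard library’s:

```python
import math

def fixed_point_iterate(f, x0, tol=1e-10, max_steps=1000):
    """Iterate x -> f(x) until successive values change by less
    than tol. Stopping this way is easy to program, even though
    it isn't logically sound."""
    x = x0
    for _ in range(max_steps):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

print(fixed_point_iterate(math.cos, 1.0))  # about 0.7390851332
```

The `max_steps` cap is there so the loop gives up gracefully on functions for which the iteration never settles.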

That won’t always work. It’ll only work if the function f is differentiable on the interval (a, b). That is, it can’t have corners. And there have to be limits on how fast the function changes on the interval (a, b). If the function changes too fast, iteration can’t be guaranteed to work. But often if we’re interested in a function at all then these conditions will be true, or we can think of a related function for which they are true.

And even if it works it won’t always work well. It can take an enormous pile of calculations to get near the fixed point. But this is why we have computers, and why we can leave them to work overnight.

And yet such a simple idea works. It appears in ancient times, in a formula for finding the square root of an arbitrary positive number ‘N’. (Find the fixed point for $f(x) = \frac{1}{2}\left(\frac{N}{x} + x\right)$). It creeps into problems that don’t look like fixed points. Calculus students learn of something called the Newton-Raphson Iteration. It finds roots, points where a function f(x) equals zero. Mathematics majors learn of numerical methods to solve ordinary differential equations. The most stable of these are again fixed-point iteration schemes, albeit in disguise.
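That ancient square-root formula really is a fixed-point iteration you can run. A sketch, with the function name my own:

```python
def babylonian_sqrt(N, x0=1.0, steps=20):
    """Approximate the square root of N as the fixed point of
    f(x) = (N/x + x) / 2, the ancient formula above."""
    x = x0
    for _ in range(steps):
        x = 0.5 * (N / x + x)
    return x

print(babylonian_sqrt(2.0))  # about 1.41421356237
```

This particular iteration is also what the Newton-Raphson method produces for the root of $x^2 - N$, which is one way the fixed-point idea creeps into problems that don’t look like fixed points.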

## Theorem Thursday: One Mean Value Theorem Of Many

For this week I have something I want to follow up on. We’ll see if I make it that far.

# The Mean Value Theorem.

My subject line disagrees with the header just above here. I want to talk about the Mean Value Theorem. It’s one of those things that turns up in freshman calculus and then again in Analysis. It’s introduced as “the” Mean Value Theorem. But like many things in calculus it comes in several forms. So I figure to talk about one of them here, and another form in a while, when I’ve had time to make up drawings.

Calculus can split effortlessly into two kinds of things. One is differential calculus. This is the study of continuity and smoothness. It studies how a quantity changes if something affecting it changes. It tells us how to optimize things. It tells us how to approximate complicated functions with simpler ones. Usually polynomials. It leads us to differential equations, problems in which the rate at which something changes depends on what value the thing has.

The other kind is integral calculus. This is the study of shapes and areas. It studies how infinitely many things, all infinitely small, add together. It tells us what the net change in things is. It tells us how to go from information about every point in a volume to information about the whole volume.

They aren’t really separate. Each kind informs the other, and gives us tools to use in studying the other. And they are almost mirrors of one another. Differentiation and integration are not quite inverses, but they come quite close. And as a result most of the important stuff you learn in differential calculus has an echo in integral calculus. The Mean Value Theorem is among them.

The Mean Value Theorem is a rule about functions. In this case it’s functions with a domain that’s an interval of the real numbers. I’ll use ‘a’ as the name for the smallest number in the domain and ‘b’ as the largest number. People talking about the Mean Value Theorem often do. The range is also the real numbers, although it doesn’t matter which ones.

I’ll call the function ‘f’ in accord with a long-running tradition of not working too hard to name functions. What does matter is that ‘f’ is continuous on the interval [a, b]. I’ve described what ‘continuous’ means before. It means that here too.

And we need one more thing. The function f has to be differentiable on the interval (a, b). You maybe noticed that before I wrote [a, b], and here I just wrote (a, b). There’s a difference here. We need the function to be continuous on the “closed” interval [a, b]. That is, it’s got to be continuous for ‘a’, for ‘b’, and for every point in-between.

But we only need the function to be differentiable on the “open” interval (a, b). That is, it’s got to be differentiable for all the points in-between ‘a’ and ‘b’. If it happens to be differentiable for ‘a’, or for ‘b’, or for both, that’s great. But we won’t turn away a function f for not being differentiable at those points. Only the interior. That sort of distinction between stuff true on the interior and stuff true on the boundaries is common. This is why mathematicians have words for “including the boundaries” (“closed”) and “never minding the boundaries” (“open”).

As to what “differentiable” is … A function is differentiable at a point if you can take its derivative at that point. I’m sure that clears everything up. There are many ways to describe what differentiability is. One that’s not too bad is to imagine zooming way in on the curve representing a function. If you start with a big old wobbly function it waves all around. But pick a point. Zoom in on that. Does the function stay all wobbly, or does it get more steady, more straight? Keep zooming in. Does it get even straighter still? If you zoomed in over and over again on the curve at some point, would it look almost exactly like a straight line?

If it does, then the function is differentiable at that point. It has a derivative there. The derivative’s value is whatever the slope of that line is. The slope is that thing you remember from taking Boring Algebra in high school. That rise-over-run thing. But this derivative is a great thing to know. You could approximate the original function with a straight line, with slope equal to that derivative. Close to that point, you’ll make a small enough error nobody has to worry about it.

That there will be this straight line approximation isn’t true for every function. Here’s an example. Picture a line that goes up and then takes a 90-degree turn to go back down again. Look at the corner. However close you zoom in on the corner, there’s going to be a corner. It’s never going to look like a straight line; there’s a 90-degree angle there. It can be a smaller angle if you like, but any sort of corner breaks this differentiability. This is a point where the function isn’t differentiable.

There are functions that are nothing but corners. They can be differentiable nowhere, or only at a tiny set of points that can be ignored. (A set of measure zero, as the dialect would put it.) Mathematicians discovered this over the course of the 19th century. They got into some good arguments about how that can even make sense. It can get worse. Also found in the 19th century were functions that are continuous only at a single point. This smashes just about everyone’s intuition. But we can’t find a definition of continuity that’s as useful as the one we use now and avoids that problem. So we accept that it implies some pathological conclusions and carry on as best we can.

Now I get to the Mean Value Theorem in its differential calculus pelage. It starts with the endpoints, ‘a’ and ‘b’, and the values of the function at those points, ‘f(a)’ and ‘f(b)’. And from here it’s easiest to figure what’s going on if you imagine the plot of a generic function f. I recommend drawing one. Just make sure you draw it without lifting the pen from paper, and without including any corners anywhere. Something wiggly.

Draw the line that connects the ends of the wiggly graph. Formally, we’re adding the line segment that connects the points with coordinates (a, f(a)) and (b, f(b)). That’s coordinate pairs, not intervals. That’s clear in the minds of the mathematicians who don’t see why not to use parentheses over and over like this. (We are short on good grouping symbols like parentheses and brackets and braces.)

Per the Mean Value Theorem, there is at least one point whose derivative is the same as the slope of that line segment. If you were to slide the line up or down, without changing its orientation, you’d find something wonderful. Most of the time this line intersects the curve, crossing from above to below or vice-versa. But there’ll be at least one point where the shifted line is “tangent”, where it just touches the original curve. Close to that touching point, the “tangent point”, the shifted line and the curve blend together and can’t be easily told apart. As long as the function is differentiable on the open interval (a, b), and continuous on the closed interval [a, b], this will be true. You might convince yourself of it by drawing a couple of curves and taking a straightedge to the results.
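You can also check the theorem numerically on a specific function. Here’s a sketch for $f(x) = x^3$ on the interval [0, 2], a case simple enough that the tangent point can be computed exactly:

```python
import math

# f(x) = x**3 on [0, 2]. The segment from (0, 0) to (2, 8) has
# slope (8 - 0) / (2 - 0) = 4. The derivative is 3*x**2, which
# equals 4 at c = sqrt(4/3), and that c is inside (0, 2).
a, b = 0.0, 2.0
slope = (b ** 3 - a ** 3) / (b - a)
c = math.sqrt(slope / 3)

print(slope)      # 4.0
print(c)          # about 1.1547
print(a < c < b)  # True
```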

This is an existence theorem. Like the Intermediate Value Theorem, it doesn’t tell us which point, or points, make the thing we’re interested in true. It just promises us that there is some point that does it. So it gets used in other proofs. It lets us mix information about intervals and information about points.

It’s tempting to try using it numerically. It looks as if it justifies a common differential-calculus trick. Suppose we want to know the value of the derivative at a point. We could pick a little interval around that point and find the endpoints. And then find the slope of the line segment connecting the endpoints. And won’t that be close enough to the derivative at the point we care about?

Well. Um. No, we really can’t be sure about that. We don’t have any idea what interval might make the derivative of the point we care about equal to this line-segment slope. The Mean Value Theorem won’t tell us. It won’t even tell us if there exists an interval that would let that trick work. We can’t invoke the Mean Value Theorem to let us get away with that.

Often, though, we can get away with it. Differentiable functions do have to follow some rules. Among them is that if you do pick a small enough interval then approximations that look like this will work all right. If the function flutters around a lot, we need a smaller interval. But a lot of the functions we’re interested in don’t flutter around that much. So we can get away with it. And there’s some grounds to trust in getting away with it. The Mean Value Theorem isn’t any part of the grounds. It just looks so much like it ought to be.
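For the record, the trick itself looks like this; a sketch, with `centered_slope` a name of my own choosing:

```python
import math

def centered_slope(f, x, h=1e-6):
    """Slope of the line segment connecting the endpoints of the
    little interval [x - h, x + h] around the point we care about."""
    return (f(x + h) - f(x - h)) / (2 * h)

# The derivative of sine at 0 is cos(0) = 1; the segment slope is close.
print(centered_slope(math.sin, 0.0))  # about 1.0
```

Making `h` smaller usually improves the approximation, until floating-point roundoff in the subtraction starts fluttering around more than the function does.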

I hope on a later Thursday to look at an integral-calculus form of the Mean Value Theorem.

## What’s The Shortest Proof I’ve Done?

I didn’t figure to have a bookend for last week’s “What’s The Longest Proof I’ve Done?” question. I don’t keep track of these things, after all. And the length of a proof must be a fluid concept. If I show something is a direct consequence of a previous theorem, is the proof’s length the two lines of new material? Or is it all the proof of the previous theorem plus two new lines?

I would think the shortest proof I’d done was showing that the logarithm of 1 is zero. This would be starting from the definition of the natural logarithm of a number x as the definite integral of 1/t on the interval from 1 to x. But that requires a bunch of analysis to support the proof. And the Intermediate Value Theorem. Does that stuff count? Why or why not?

But this happened to cross my desk: The Shortest-Known Paper Published in a Serious Math Journal: Two Succinct Sentences, an essay by Dan Colman. It reprints a paper by L J Lander and T R Parkin which appeared in the Bulletin of the American Mathematical Society in 1966.

It’s about Euler’s Sums of Powers Conjecture. This is a spinoff of Fermat’s Last Theorem. Leonhard Euler observed that you need at least two whole numbers so that their squares add up to a square. And you need three cubes of whole numbers to add up to the cube of a whole number. Euler speculated you needed four whole numbers so that their fourth powers add up to a fourth power, five whole numbers so that their fifth powers add up to a fifth power, and so on.

And it’s not so. Lander and Parkin found that this conjecture is false. They did it the new old-fashioned way: they set a computer to test cases. And they found four whole numbers whose fifth powers add up to a fifth power. So the quite short paper answers a long-standing question, and would be hard to beat for accessibility.
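The counterexample in that 1966 paper is small enough to check in a few lines. The four whole numbers Lander and Parkin found were 27, 84, 110, and 133:

```python
# Lander and Parkin's counterexample to Euler's conjecture:
# four fifth powers adding up to a fifth power.
lhs = 27 ** 5 + 84 ** 5 + 110 ** 5 + 133 ** 5
rhs = 144 ** 5

print(lhs)         # 61917364224
print(lhs == rhs)  # True
```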

There is another famous short proof sometimes credited as the most wordless mathematical presentation. Frank Nelson Cole gave it on the 31st of October, 1903. It was about the Mersenne number $2^{67} - 1$, or in human notation, 147,573,952,589,676,412,927. It was already known the number wasn’t prime. (People wondered because numbers of the form $2^n - 1$ often lead us to perfect numbers. And those are interesting.) But nobody knew what its factors were. Cole gave his talk by going up to the board, working out $2^{67} - 1$, and then moving to the other side of the board. There he wrote out 193,707,721 × 761,838,257,287, and showed what that was. Then, per legend, he sat down without ever saying a word, and took in the standing ovation.

I don’t want to cast aspersions on a great story like that. But mathematics is full of great stories that aren’t quite so. And I notice that one of Cole’s doctoral students was Eric Temple Bell. Bell gave us a great many tales of mathematics history that are grand and great stories that just weren’t so. So I want it noted that I don’t know where we get this story from, or how it may have changed in the retellings. But Cole’s proof is correct, at least according to Octave.
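Python, like Octave, is happy to redo Cole’s board work the lazy modern way:

```python
# Cole's arithmetic: 2 to the 67th power, minus 1, and the
# product of the two factors he wrote on the other board.
mersenne = 2 ** 67 - 1

print(mersenne)                              # 147573952589676412927
print(193707721 * 761838257287 == mersenne)  # True
```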

So not every proof is too long to fit in the universe. But then I notice that Mathworld’s page regarding the Euler Sum of Powers Conjecture doesn’t cite the 1966 paper. It cites instead Lander and Parkin’s “A Counterexample to Euler’s Sum of Powers Conjecture” from Mathematics of Computation volume 21, number 97, of 1967. There the paper has grown to three pages, although it’s only a couple paragraphs of one page and three lines of citation on the third. It’s not so easy to read either, but it does explain how they set about searching for counterexamples, which may give you a better idea of how numerical mathematicians find things.

## Theorem Thursday: What Is Cramer’s Rule?

KnotTheorist asked for this one during my appeal for theorems to discuss. And I’m taking an open interpretation of what a “theorem” is. I can do a rule.

# Cramer’s Rule

I first learned of Cramer’s Rule in the way I expect most people do. It was an algebra course. I mean high school algebra. By high school algebra I mean you spend roughly eight hundred years learning ways to solve for x or to plot y versus x. Then take a pause for polar coordinates and matrices. Then you go back to finding both x and y.

Cramer’s Rule came up in the context of solving simultaneous equations. You have more than one variable. So x and y. Maybe z. Maybe even a w, before whoever set up the problem gives up and renames everything $x_1$ and $x_2$ and $x_{62}$ and all that. You also have more than one equation. In fact, you have exactly as many equations as you have variables. Are there any sets of values those variables can have which make all those equations true simultaneously? Thus the imaginative name “simultaneous equations” or the search for “simultaneous solutions”.

If all the equations are linear then we can always say whether there’s simultaneous solutions. By “linear” we mean what we always mean in mathematics, which is, “something we can handle”. But more exactly it means the equations have x and y and whatever other variables only to the first power. No x-squared or square roots of y or tangents of z or anything. (The equations are also allowed to omit a variable. That is, if you have one equation with x, y, and z, and another with just x and z, and another with just y and z, that’s fine. We pretend the missing variable is there and just multiplied by zero, and proceed as before.) One way to find these solutions is with Cramer’s Rule.

Cramer’s Rule sets up some matrices based on the system of equations. If the system has two equations, it sets up three matrices. If the system has three equations, it sets up four matrices. If the system has twelve equations, it sets up thirteen matrices. You see the pattern here. And then you can take the determinant of each of these matrices. Dividing the determinant of one of these matrices by another one tells you what value of x makes all the equations true. Dividing the determinant of another matrix by the determinant of one of these matrices tells you what value of y makes all the equations true. And so on. The Rule tells you which determinants to use. It also says what it means if the determinant you want to divide by equals zero. It means there’s either no set of simultaneous solutions or there’s infinitely many solutions.
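Here’s the rule sketched out for the two-equation case, in Python; the helper names are mine:

```python
# Cramer's Rule for two equations in two unknowns:
#   a1*x + b1*y = c1
#   a2*x + b2*y = c2
def det2(m):
    """Determinant of a 2x2 matrix given as [[p, q], [r, s]]."""
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def cramer_2x2(a1, b1, c1, a2, b2, c2):
    d = det2([[a1, b1], [a2, b2]])
    if d == 0:
        raise ValueError("no unique solution: none, or infinitely many")
    x = det2([[c1, b1], [c2, b2]]) / d  # x-column replaced by the c's
    y = det2([[a1, c1], [a2, c2]]) / d  # y-column replaced by the c's
    return x, y

# 2x + y = 5 and x - y = 1 have the simultaneous solution x = 2, y = 1.
print(cramer_2x2(2, 1, 5, 1, -1, 1))  # (2.0, 1.0)
```

The pattern for bigger systems is the same: replace one column of the coefficient matrix with the constants, take the determinant, divide by the determinant of the unaltered coefficient matrix.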

This gets dropped on us students in the vain effort to convince us knowing how to calculate determinants is worth it. It’s not that determinants aren’t worth knowing. It’s just that they don’t seem to tell us anything we care about. Not until we get into mappings and calculus and differential equations and other mathematics-major stuff. We never see it in high school.

And the hard part of determinants is that for all the cool stuff they tell us, they take forever to calculate. The determinant for a matrix with two rows and two columns isn’t bad. Three rows and three columns is getting bad. Four rows and four columns is awful. The determinant for a matrix with five rows and five columns you only ever calculate if you’ve made your teacher extremely cross with you.
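To make the growth concrete, here’s the by-hand cofactor expansion written as a small Python function. The number of multiplications grows roughly like n factorial with the matrix size, which is why nobody calculates a five-by-five determinant for fun.

```python
def det_cofactor(m):
    """Determinant by cofactor expansion along the first row --
    the by-hand method. Work grows roughly like n! for an n-by-n matrix."""
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        # minor: the matrix with row 0 and column j deleted
        minor = [row[:j] + row[j + 1:] for row in m[1:]]
        total += (-1) ** j * m[0][j] * det_cofactor(minor)
    return total

print(det_cofactor([[1, 2], [3, 4]]))   # 1*4 - 2*3 = -2
print(det_cofactor([[2, 0, 1],
                    [1, 3, -1],
                    [0, 5, 4]]))
```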

So there’s the first problem with Cramer’s Rule. It takes a lot of calculating. Make any errors along the way in the calculation and your work is wrong. And worse, it won’t be wrong in an obvious way. You can find the error only by going over every single step and hoping to catch the spot where you, somehow, got 36 times -7 minus 21 times -8 wrong.

The second problem is nobody in high school algebra mentions why systems of linear equations should be interesting to solve. Oh, maybe they’ll explain how this is the work you do to figure out where two straight lines intersect. But that just shifts the “and we care because … ?” problem back one step. Later on we might come to understand the lines represent cases where something we’re interested in is true, or where it changes from true to false.

This sort of simultaneous-solution problem turns up naturally in optimization problems. These are problems where you try to find a maximum subject to some constraints. Or find a minimum. Maximums and minimums are the same thing when you think about them long enough. If all the constraints can be satisfied at once and you get a maximum (or minimum, whatever), great! If they can’t … Well, you can study how close it’s possible to get, and what happens if you loosen one or more constraints. That’s worth knowing about.

The third problem with Cramer’s Rule is that, as a method, it kind of sucks. We can be convinced that simultaneous linear equations are worth solving, or at least that we have to solve them to get out of High School Algebra. And we have computers. They can grind away and work out thirteen determinants of twelve-row-by-twelve-column matrices. They might even get an answer back before the end of the term. (The amount of work needed for a determinant grows scary fast as the matrix gets bigger.) But all that work might be meaningless.

The trouble is that Cramer’s Rule is numerically unstable. Before I even explain what that is you already sense it’s a bad thing. Think of all the good things in your life you’ve heard described as unstable. Fair enough. But here’s what we mean by numerically unstable.

Is 1/3 equal to 0.3333333? No, and we know that. But is it close enough? Sure, most of the time. Suppose we need a third of sixty million. 0.3333333 times 60,000,000 equals 19,999,998. That’s a little off of the correct 20,000,000. But I bet you wouldn’t even notice the difference if nobody pointed it out to you. Even if you did notice it you might write off the difference. “If we must, make up the difference out of petty cash”, you might declare, as if that were quite sensible in the context.

And that’s so because this multiplication is numerically stable. Make a small error in either term and you get a proportional error in the result. A small mistake will — well, maybe it won’t stay small, necessarily. But it’ll not grow too fast too quickly.

So now you know intuitively what an unstable calculation is. This is one in which a small error doesn’t necessarily stay proportionally small. It might grow huge, arbitrarily huge, and in only a few calculations. So your answer might be computed just fine, but actually be meaningless.

This isn’t because of a flaw in the computer per se. That is, it’s working as designed. It’s just that we might need, effectively, infinitely many digits of precision for the result to be correct. You see where there may be problems achieving that.

Cramer’s Rule isn’t guaranteed to be nonsense, and that’s a relief. But it is vulnerable to this. You can set up problems that look harmless but which the computer can’t do. And that’s surely the worst of all worlds, since we wouldn’t bother calculating them numerically if it weren’t too hard to do by hand.
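Here’s the flavor of “looks harmless but the computer can’t do it”. The sketch below uses a Hilbert matrix, whose entries are innocent fractions like 1/2 and 1/3, and rigs the right-hand side so the exact answer is all ones. It solves with NumPy’s `np.linalg.solve`, which is a much sturdier method than Cramer’s Rule; even so, the answer comes back visibly wrong.

```python
import numpy as np

# A 12-by-12 Hilbert matrix: entry (i, j) is 1/(i + j + 1).
# The entries look harmless, but the matrix is famously ill-conditioned.
n = 12
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = H @ x_true                # rigged so the exact answer is all ones

x = np.linalg.solve(H, b)     # Gaussian elimination with pivoting --
                              # far better-behaved than Cramer's Rule
print(np.max(np.abs(x - x_true)))   # nowhere near zero
```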

(Let me direct the reader who’s unintimidated by mathematical jargon, and who likes seeing a good Wikipedia Editors quarrel, to the Cramer’s Rule Talk Page. Specifically to the section “Cramer’s Rule is useless.”)

I don’t want to get too down on Cramer’s Rule. It’s not like the numerical instability hurts every problem you might use it on. And you can, at the cost of some more work, detect whether a particular set of equations will have instabilities. That requires a lot of calculation, but if we have the computer to do the work, fine. Let it. And a computer can limit its numerical instabilities if it can do symbolic manipulations. That is, if it can use the idea of “one-third” rather than 0.3333333. The software package Mathematica, for example, does symbolic manipulations very well. You can shed many numerical-instability problems, although you gain the problem of paying for a copy of Mathematica.

If you just care about, or just need, one of the variables then what the heck. Cramer’s Rule lets you solve for just one or just some of the variables. That seems like a niche application to me, but it is there.

And the Rule re-emerges in pure analysis, where numerical instability doesn’t matter. When we look to differential equations, for example, we often find solutions are combinations of several independent component functions. Bases, in fact. Testing whether we have found independent bases can be done through a thing called the Wronskian. That’s a way that Cramer’s Rule appears in differential equations.
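For a small concrete taste of that: sine and cosine are independent solutions of the differential equation y'' + y = 0, and their Wronskian, f g' − f' g, works out to the constant −1. Since that’s never zero, the two solutions are independent. A few lines check this numerically (using the derivatives we know in closed form).

```python
import math

# Wronskian of f and g at a point t: W = f(t) g'(t) - f'(t) g(t).
# For f = sin, g = cos (independent solutions of y'' + y = 0):
#   W = sin * (-sin) - cos * cos = -(sin^2 + cos^2) = -1.
def wronskian_sin_cos(t):
    f, fp = math.sin(t), math.cos(t)    # sin and its derivative
    g, gp = math.cos(t), -math.sin(t)   # cos and its derivative
    return f * gp - fp * g

for t in (0.0, 1.0, 2.5):
    print(wronskian_sin_cos(t))   # always -1, to rounding
```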

Wikipedia also asserts the use of Cramer’s Rule in differential geometry. I believe that’s a true statement, and that it will be reflected in many mechanics problems. In these we can use our knowledge that, say, energy and angular momentum of a system are constant values to tell us something of how positions and velocities depend on each other. But I admit I’m not well-read in differential geometry. That’s something which has indeed caused me pain in my scholarly life. I don’t know whether differential geometers thank Cramer’s Rule for this insight or whether they’re just glad to have got all that out of the way. (See the above Wikipedia Editors quarrel.)

I admit for all this talk about Cramer’s Rule I haven’t said what it is. Not in enough detail to pass your high school algebra class. That’s all right. It’s easy to find. MathWorld has the rule in pretty simple form. MathWorld does forget to define what it means by the vector d. (It’s the vector with components d₁, d₂, et cetera.) But that’s enough technical detail. If you need to calculate something using it, you can probably look closer at the problem and see if you can do it another way instead. Or you’re in high school algebra and just have to slog through it. It’s all right. Eventually you can put x and y aside and do geometry.

• #### KnotTheorist 3:44 pm on Thursday, 9 June, 2016 Permalink | Reply

Thanks for the post! It’s good to know I’m not the only one who wondered about the usefulness of Cramer’s Rule for computation. That was part of my motivation for asking about it, actually; I was curious about what, if anything, it was good for.

Also, thanks for linking to that Wikipedia article. It was an interesting read.


• #### Joseph Nebus 3:21 am on Saturday, 11 June, 2016 Permalink | Reply

I’m happy to be of service. And, as I say, it’s not like the rule is ever wrong. The worst you can hold against it is that it’s not the quickest or most stable way of doing a lot of problems. But if you can control that, then it’s a tool you have.

But I admit not using it except as the bit that justifies some work in later proofs since I got out of high school algebra. It’s so beautiful a thing it seems like it ought to be useful.


• #### xianyouhoule 12:08 pm on Friday, 10 June, 2016 Permalink | Reply

Can we understand Cramer’s Rule in a geometrical way??



• #### Joseph Nebus 3:32 am on Saturday, 11 June, 2016 Permalink | Reply

Happy to help.

We can work out geometric interpretations of Cramer’s Rule. But I’m not sure how compelling they are. They come about from looking at sets of linear equations as a linear transformation. That is, they’re stretching out and rotating and adding together directions in space. Then the determinant of the matrix corresponding to a set of equations has a good geometric interpretation. It’s how much a unit square gets expanded, or shrunk, by the projection the matrix represents.

For Cramer’s Rule we look at the determinants of two matrices. One of them is the matrix of the original set of equations. And the other is a similar matrix that has, in the place of (say) constants-times-x, the constant numbers from the right-hand-sides of the original equations. The constants with no variables on them. This matrix projects space in a slightly different way.

So Cramer’s Rule tells us that the value of x (say) which makes all the equations true is equal to how much the modified matrix with constants instead of x-coefficients expands space, divided by how much the original matrix expands space. And similarly for y and for z and whatever other coordinates you have. And as I say, Wikipedia’s entry on Cramer’s Rule has some fair pictures showing this.

I admit I’m not sure that’s compelling, though. I don’t have a good answer offhand for why we should expect these ratios to be important, or why these particular modified matrices should enter into it. But it is there and it might help someone at least remember how this rule works.


• #### howardat58 3:48 pm on Thursday, 16 June, 2016 Permalink | Reply

I am of the opinion that Cramer’s Rule sucks. What is wrong with Gaussian Elimination ????????

(apart from the relatively enormous speed !!!)


• #### Joseph Nebus 4:34 am on Friday, 17 June, 2016 Permalink | Reply

Well, speed is the big strike against Gaussian Elimination, at least for very large systems. But Gaussian Elimination is a lot better off than Cramer’s Rule on that count. Gaussian Elimination also isn’t numerically stable for every matrix. But for diagonally dominant or positive-definite matrices it is, and that’s usually good enough.

As often happens with numerical techniques, nothing’s quite right all the time. Best you can do is have some idea what’s usually all right.


## A Leap Day 2016 Mathematics A To Z: Polynomials

I have another request for today’s Leap Day Mathematics A To Z term. Gaurish asked for something exciting. This should be less challenging than Dedekind Domains. I hope.

## Polynomials.

Polynomials are everything. Everything in mathematics, anyway. If humans study it, it’s a polynomial. If we know anything about a mathematical construct, it’s because we ran across it while trying to understand polynomials.

I exaggerate. A tiny bit. Maybe by three percent. But polynomials are big.

They’re easy to recognize. We can get them in pre-algebra. We make them out of a set of numbers called coefficients and one or more variables. The coefficients are usually either real numbers or complex-valued numbers. The variables we usually allow to be either real or complex-valued numbers. We take each coefficient and multiply it by some power of each variable. And we add all that up. So, polynomials are things that look like these things:

$x^2 - 2x + 1$
$12 x^4 + 2\pi x^2 y^3 - 4x^3 y - \sqrt{6}$
$\ln(2) + \frac{1}{2}\left(x - 2\right) - \frac{1}{2 \cdot 2^2}\left(x - 2\right)^2 + \frac{1}{3 \cdot 2^3}\left(x - 2\right)^3 - \frac{1}{4 \cdot 2^4}\left(x - 2\right)^4 + \cdots$
$a_n x^n + a_{n - 1}x^{n - 1} + a_{n - 2}x^{n - 2} + \cdots + a_2 x^2 + a_1 x^1 + a_0$

The first polynomial maybe looks nice and comfortable. The second may look a little threatening, what with it having two variables and a square root in it, but it’s not too weird. The third is an infinitely long polynomial; you’re supposed to keep going on in that pattern, adding even more terms. The last is a generic representation of a polynomial. Each number a₀, a₁, a₂, et cetera is some coefficient that we in principle know. It’s a good way of representing a polynomial when we want to work with it but don’t want to tie ourselves down to a particular example. The highest power we raise a variable to we call the degree of the polynomial. A second-degree polynomial, for example, has an x² in it, but not an x³ or x⁴ or x¹⁸ or anything like that. A third-degree polynomial has an x³, but not x to any higher powers. Degree is a useful way of saying roughly how long a polynomial is, so it appears all over discussions of polynomials.

But why do we like polynomials? Why like them so much that MathWorld lists 1,163 pages that mention polynomials?

It’s because they’re great. They do everything we’d ever want to do and they’re great at it. We can add them together as easily as we add regular old numbers. We can subtract them as well. We can multiply and divide them. There’s even prime polynomials, just like there are prime numbers. They take longer to work out, but they’re not harder.

And they do great stuff in advanced mathematics too. In calculus we want to take derivatives of functions. Polynomials, we always can. We get another polynomial out of that. So we can keep taking derivatives, as many as we need. (We might need a lot of them.) We can integrate too. The integration produces another polynomial. So we can keep doing that as long as we need to. (We need to do this a lot, too.) This lets us solve so many problems in calculus, which is about how functions work. It also lets us solve so many problems in differential equations, which is about systems whose change depends on the current state of things.

That’s great for analyzing polynomials, but what about things that aren’t polynomials?

Well, if a function is continuous, then it might as well be a polynomial. To be a little more exact, we can set a margin of error. And we can always find polynomials that are less than that margin of error away from the original function. The original function might be annoying to deal with. The polynomial that’s as close to it as we want, though, isn’t.
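You can watch this promise in action. Here I ask NumPy for a degree-8 least-squares polynomial fit to the cosine on [0, π] and measure the worst disagreement; the degree and the margin are just picked for the demonstration.

```python
import numpy as np

# Fit a degree-8 polynomial to cos(x) on [0, pi] and measure how far
# off it gets. Raising the degree pushes the error below any margin
# we care to set -- that's the Weierstrass approximation promise.
xs = np.linspace(0.0, np.pi, 400)
coeffs = np.polyfit(xs, np.cos(xs), deg=8)
worst = np.max(np.abs(np.polyval(coeffs, xs) - np.cos(xs)))
print(worst)   # comfortably below a margin of, say, 0.0001
```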

Not every function is continuous. Most of them aren’t. But most of the functions we want to do work with are, or at least are continuous in stretches. Polynomials let us understand the functions that describe most real stuff.

Nice for mathematicians, all right, but how about for real uses? How about for calculations?

Oh, polynomials are just magnificent. You know why? Because you can evaluate any polynomial as soon as you can add and multiply. (Also subtract, but we think of that as addition.) Remember, x⁴ just means “x times x times x times x”, four of those x’s in the product. All these polynomials are easy to evaluate.

Even better, we don’t have to evaluate them. We can automate away the evaluation. It’s easy to set a calculator doing this work, and it will do it without complaint and with few unforeseeable mistakes.
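The standard way to automate that evaluation is Horner’s method, which rearranges the polynomial so each coefficient costs exactly one multiplication and one addition:

```python
def horner(coeffs, x):
    """Evaluate a polynomial by Horner's method.
    coeffs lists a_n, a_(n-1), ..., a_1, a_0 (highest power first):
    a_n x^n + ... + a_0 becomes (...(a_n x + a_(n-1)) x + ...) x + a_0,
    one multiplication and one addition per coefficient."""
    result = 0.0
    for a in coeffs:
        result = result * x + a
    return result

# x^2 - 2x + 1 at x = 3: 9 - 6 + 1 = 4
print(horner([1, -2, 1], 3.0))   # 4.0
```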

Now remember that thing where we can make a polynomial close enough to any continuous function? And we can always set a calculator to evaluate a polynomial? Guess what this means about continuous functions. We have a tool that lets us calculate stuff we would want to know. Things like arccosines and logarithms and Bessel functions and all that. And we get nice, easy-to-understand numbers out of them. For example, that third polynomial I gave you above? That’s not just infinitely long. It’s also a polynomial that approximates the natural logarithm. Pick a positive number x that’s between 0 and 4 and put it in that polynomial. Calculate terms and add them up. You’ll get closer and closer to the natural logarithm of that number. You’ll get there faster if you pick a number near 2, but you’ll eventually get there for whatever number you pick. (Calculus will tell us why x has to be between 0 and 4. Don’t worry about it for now.)
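You can try this yourself. Here’s that logarithm polynomial (the Taylor series of the natural logarithm around 2), summed term by term in Python:

```python
import math

def log_via_polynomial(x, terms=30):
    """Approximate ln(x) with the Taylor series around 2:
    ln(2) + sum over n of (-1)^(n+1) * (x - 2)^n / (n * 2^n).
    Converges for x between 0 and 4, fastest near x = 2."""
    total = math.log(2)
    for n in range(1, terms + 1):
        total += (-1) ** (n + 1) * (x - 2) ** n / (n * 2 ** n)
    return total

print(log_via_polynomial(2.5), math.log(2.5))   # the two agree closely
```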

So through polynomials we can understand functions, analytically and numerically.

And they keep revealing things to us. We discovered complex-valued numbers because we wanted to find roots, values of x that make a polynomial of x equal to zero. Some formulas worked well for third- and fourth-degree polynomials. (They look like the quadratic formula, which solves second-degree polynomials. The big difference is nobody remembers what they are without looking them up.) But the formulas sometimes called for things that looked like square roots of negative numbers. Absurd! But if you carried on as if these square roots of negative numbers meant something, you got meaningful answers. And correct answers.

We wanted formulas to solve fifth- and higher-degree polynomials exactly. We can do this with second and third and fourth-degree polynomials, after all. It turns out we can’t. Oh, we can solve some of them exactly. The attempt to understand why, though, helped us create and shape group theory, the study of things that look like but aren’t numbers.

Polynomials go on, sneaking into everything. We can look at a square matrix and discover its characteristic polynomial. This allows us to find beautifully-named things like eigenvalues and eigenvectors. These reveal secrets of the matrix’s structure. We can find polynomials in the formulas that describe how many ways to split up a group of things into a smaller number of sets. We can find polynomials that describe how networks of things are connected. We can find polynomials that describe how a knot is tied. We can even find polynomials that distinguish between a knot and the knot’s reflection in the mirror.
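For instance, with NumPy (a tiny made-up matrix; `np.poly` returns the characteristic polynomial’s coefficients, highest power first):

```python
import numpy as np

# A small symmetric matrix. Its characteristic polynomial is
# x^2 - 4x + 3, whose roots are the eigenvalues, 1 and 3.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

char_poly = np.poly(A)           # coefficients of det(xI - A): [1, -4, 3]
eigenvalues = np.sort(np.roots(char_poly).real)

print(char_poly)
print(eigenvalues)               # close to [1, 3]
```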

Polynomials are everything.

• #### gaurish 3:40 pm on Monday, 4 April, 2016 Permalink | Reply

Beautiful post!
Recently I studied Taylor’s Theorem & Weierstrass approximation theorem. These theorems illustrate your ideas :)


• #### Joseph Nebus 6:38 pm on Monday, 4 April, 2016 Permalink | Reply

Thank you kindly. And yeah, the Taylor Theorem and Weierstrass Approximation Theorem are the ideas I was sneaking around without trying to get too technical. (Maybe I should start including a postscript of technical talk to these essays.)


## Terrible And Less-Terrible Things with Pi

We are coming around “Pi Day”, the 14th of March, again. I don’t figure to have anything thematically appropriate for the day. I figure to continue the Leap Day 2016 Mathematics A To Z, and I don’t tend to do a whole two posts in a single day. Two just seems like so many, doesn’t it?

But I would like to point people who’re interested in some π-related stuff to what I posted last year. Those posts were:

• Calculating Pi Terribly, in which I show a way to work out the value of π that’s fun and would take forever. I mean, yes, properly speaking they all take forever, but this takes forever just to get a couple of digits right. It might be fun to play with but don’t use this to get your digits of π. Really.
• Calculating Pi Less Terribly, in which I show a way to do better. This doesn’t lend itself to any fun side projects. It’s just calculations. But it gets you accurate digits a lot faster.

## Spaghetti Mathematics

Let’s ease into Monday. Win Smith with the Well Tempered Spreadsheet blog encountered one of those idle little puzzles that captures the imagination and doesn’t let go. It starts as many will with spaghetti.

I’m sure you’re intrigued too. It’s not the case that any old splitting of a strand of spaghetti will give you three pieces you can make into a triangle. You need the lengths of the three pieces to satisfy what’s imaginatively called the Triangle Inequality. That inequality demands the lengths of any two sides have to be greater than the length of the third side. So, suppose we start with spaghetti that’s 12 inches long, and we have it cut into pieces 5, 4, and 3 inches long: that’s fine. If we have it cut into pieces 9, 2, and 1 inches long, we’re stuck.

The Triangle Inequality often gets mentioned alongside the Cauchy Inequality, or the Cauchy-Schwarz Inequality, or the Cauchy-Bunyakovsky-Schwarz Inequality, or if that’s getting too long the CBS Inequality. And some pranksters reorder it to the Cauchy-Schwarz-Bunyakovsky Inequality. The Cauchy (etc) Inequality isn’t quite the same thing as the Triangle Inequality. But it’s an important and useful idea, and the Cauchy (etc) Inequality has the Triangle Inequality as one of its consequences. The name of it so overflows with names because mathematics history is complicated. Augustin-Louis Cauchy published the first proof of it, but for the special case of sums of series. Viktor Bunyakovsky proved a similar version of it for integrals, and has a name that’s so very much fun to say. Hermann Amandus Schwarz first put the proof into its modern form. So who deserves credit for it? Good question. If it influences your decision, know that Cauchy was incredibly prolific and has plenty of things named for him already. He’s got, without exaggeration, about eight hundred papers to his credit. Collecting all his work into a definitive volume took from 1882 to 1974.

Back to the spaghetti. The problem’s a fun one and if you follow the Twitter link above you’ll see the gritty work of mathematicians reasoning out the problem. As with every probability problem ever, the challenge is defining exactly what you’re looking for the probability of. This we call finding the “sample space”, the set of all the possible outcomes and how likely each of them is. Subtle changes in how you think of the problem will change whether you are right.

Smith cleans things up a bit, but preserves the essence of how the answer worked out. The answer that looks most likely correct was developed partly by reasoning and partly by numerical simulation. Numerical simulation is a great blessing for probability problems. Often the easiest way to figure out how likely something is will be trying it. But this does require working out the sample space, and what parts of the sample space represent what you’re interested in. With the information the numerical simulation provided, Smith was able to go back and find an analytic, purely reason-based, answer that looks plausible.
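Such a simulation can be sketched in a few lines. This one does the classic version, where the two break points are chosen independently and uniformly along the strand; the puzzle linked above may define its breaking differently. For this classic version the known exact answer is 1/4.

```python
import random

def triangle_fraction(trials=200_000, seed=1):
    """Break a unit strand at two uniform random points; count how
    often the three pieces satisfy the Triangle Inequality."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        a, b = sorted((rng.random(), rng.random()))
        pieces = (a, b - a, 1.0 - b)
        # Triangle Inequality: every piece shorter than the other two
        # combined, i.e. no piece is half the strand or longer.
        if max(pieces) < 0.5:
            wins += 1
    return wins / trials

print(triangle_fraction())   # close to the exact answer, 1/4
```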

• #### elkement (Elke Stangl) 8:27 am on Tuesday, 19 January, 2016 Permalink | Reply

I’d be very interested in this – but the link to the blog does not work: It’s an a tag without a href attribute.


• #### Joseph Nebus 11:07 pm on Tuesday, 19 January, 2016 Permalink | Reply

Oh, that’s embarrassing. It looks like I wrote it up originally and missed a quote mark in the attribute tag, and of course WordPress deleted that rather than give me a hint something was wrong. (Well, I should’ve checked the link before and after posting too, admittedly.) Thank you. It ought to be working now.


## Reading the Comics, January 15, 2015: Electric Brains and Klein Bottles Edition

I admit I don’t always find a theme running through Comic Strip Master Command’s latest set of mathematically-themed comics. The edition names are mostly so that I can tell them apart when I see a couple listed in the Popular Posts roundup anyway.

Jimmy Hatlo’s Little Iodine for the 12th of January, 2016. Originally run the 7th of November, 1954.

Jimmy Hatlo’s Little Iodine is a vintage comic strip from the 1950s. It strikes me as an unlicensed adaptation of Baby Snooks, but that’s not something for me to worry about. The particular strip, originally from the 7th of November, 1954 (and just run the 12th of January this year) interests me for its ancient views of computers. It’s from the days they were called “electric brains”. I’m also impressed that the machine on display early on is able to work out the “square root of 7921x²y²”. The square root of 7921 is no great feat. Being able to work with the symbols of x and y without knowing what they stand for, though, does impress me. I’m not sure there were computers which could handle that sort of symbolic manipulation in 1954. That sort of ability to work with a quantity by name rather than value is what we would buy Mathematica for, if we could afford it. It’s also at least a bit impressive that someone knows the square of 89 offhand. All told, I think this is my favorite of this essay’s set of strips. But it’s a weak field considering none of them are “students giving a snarky reply to a homework/exam/blackboard question”.

Joe Martin’s Willy and Ethel for the 13th of January is a percentages joke. Some might fault it for talking about people giving 110 percent, but of course, what is “100 percent”? If it’s the standard amount of work being done then it does seem like ten people giving 110 percent gets the job done as quickly as eleven people doing 100 percent. If work worked like that.

Joe Martin’s Willy and Ethel for the 13th of January, 2016. The link will likely expire in mid-February.

Steve Sicula’s Home and Away for the 13th (a rerun from the 8th of October, 2004) gives a wrongheaded application of a decent principle. The principle is that of taking several data points and averaging their value. The problem with data is that it’s often got errors in it. Something weird happened and it doesn’t represent what it’s supposed to. Or it doesn’t represent it well. By averaging several data points together we can minimize the influence of a fluke reading. Or if we’re measuring something that changes in time, we might use a running average of the last several sampled values. In this way a short-term spike or a meaningless flutter will be minimized. We can avoid wasting time reacting to something that doesn’t matter. (The cost of this, though, is that if a trend is developing we will notice it later than we otherwise would.) Still, sometimes a data point is obviously wrong.

Zach Weinersmith’s Saturday Morning Breakfast Cereal wanted my attention, and so on the 13th it did a joke about Zeno’s Paradox. There are actually four classic Zeno’s Paradoxes, although the one riffed on here I think is the most popular. This one — the idea that you can’t finish something (leaving a room is the most common form) because you have to get halfway done, and have to get halfway to being halfway done, and halfway to halfway to halfway to being done — is often resolved by people saying that Zeno just didn’t understand that an infinite series could converge. That is, that you can add together infinitely many numbers and get a finite number. I’m inclined to think Zeno did not, somehow, think it was impossible to leave rooms. What the paradoxes as a whole get to are questions about space and time: they’re either infinitely divisible or they’re not. And either way produces effects that don’t seem to quite match our intuitions.
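The convergence is easy to watch numerically. Here are Zeno’s halves added up:

```python
# Zeno's halves: 1/2 + 1/4 + 1/8 + ... The partial sums close in on 1,
# so crossing infinitely many halfway points still covers a finite distance.
total = 0.0
for k in range(1, 61):
    total += 0.5 ** k
print(total)   # as close to 1 as floating point can tell
```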

The next day Saturday Morning Breakfast Cereal does a joke about Klein bottles. These are famous topological constructs. At least they’re famous in the kinds of places people talk about topological constructs. It’s much like the Möbius strip, a ribbon given a twist and joined back to its edge. The Klein bottle similarly you can imagine as a cylinder stretched out into the fourth dimension, given a twist, then joined back to itself. We can’t really do this, what with it being difficult to craft four-dimensional objects. But we can imagine this, and it creates an object that doesn’t have a boundary, and has only one side. There’s not an inside or an outside. There’s no making this in the real world, but we can make nice-looking approximations, usually as bottles.

Ruben Bolling’s Super-Fun-Pak Comix for the 13th of January is an extreme installment of Chaos Butterfly. The trouble with touching Chaos Butterfly to cause disasters is that you don’t know — you can’t know — what would have happened had you not touched the butterfly. You change your luck, but there’s no way to tell whether for the better or worse. One of the commenters at Gocomics.com alludes to this problem.

Jon Rosenberg’s Scenes From A Multiverse for the 13th of January makes quite literal quantum mechanics talk about probability waves and quantum foam and the like. The wave formulation of quantum mechanics, the most popular and accessible one, describes what’s going on in equations that look much like the equations for things diffusing into space. And quantum mechanical problems are often solved by supposing that the probability distribution we’re interested in can be broken up into a series of sinusoidal waves. Representing a complex function as a set of waves is a common trick, not just in quantum mechanics, because it works so well so often. Sinusoidal waves behave in nice, predictable ways for most differential equations. So converting a hard differential equation problem into a long string of relatively easy differential equation problems is usually a good trade.

Tom Thaves’s Frank and Ernest for the 14th of January ties together the baffling worlds of grammar and negative numbers. It puts Frank and Ernest on panel with Euclid, who’s a fair enough choice to represent the foundation of (western) mathematics. He’s famous for the geometry we now call Euclidean. That’s the common everyday kind of blackboards and tabletops and solid cubes and spheres. But among his writings are compilations of arithmetic, as understood at the time. So if we know anyone in Ancient Greece to have credentials to talk about negative numbers it’s him. But the choice of Euclid traps the panel into an anachronism: the Ancient Greeks just didn’t think of negative numbers. They could work through “a lack of things” or “a shortage of something”, but a negative? That’s a later innovation. But it’s hard to think of a good rewriting of the joke. You might have Isaac Newton be consulted, but Newton makes normal people think of gravity and physics, confounding the mathematics joke. There’s a similar problem with Albert Einstein. Leibniz or Gauss should be good, but I suspect they’re not the household names that even Euclid is. And if we have to go “less famous mathematician than Gauss” we’re in real trouble. (No, not Andrew Wiles. Normal people know him as “the guy that proved Fermat’s thing”, and that’s too many words to fit on panel.) Perhaps the joke can’t be made to read cleanly and make good historic sense.

## The Set Tour, Part 11: Doughnuts And Lots Of Them

I’ve been slow getting back to my tour of commonly-used domains for several reasons. It’s been a busy season. It’s so much easier to plan out writing something than it is to write something. The usual. But one of my excuses this time is that I’m not sure the set I want to talk about is that common. But I like it, and I imagine a lot of people will like it. So that’s enough.

## T and Tⁿ

T stands for the torus. Or the toroid, if you prefer. It’s a fun name. You know the shape. It’s a doughnut. Take a cylindrical tube and curl it around back on itself. Don’t rip it or fold it. That’s hard to do with paper or a sheet of clay or other real-world stuff. But we can imagine it easily enough. I suppose we can make a computer animation of it, if by ‘we’ we mean ‘you’.

We don’t use the whole doughnut shape for T. And no, we don’t use the hole either. What we use is the surface of the doughnut, the part that could get glazed. We ignore the inside, just the same way we had S represent the surface of a sphere (or the edge of a circle, or the boundary of a hypersphere). If there is a common symbol for the torus including the interior I don’t know it. I’d be glad to hear it if someone has one.

What good is the surface of a torus, though? Well, it’s a neat shape. Slice it in one direction, the way you’d cut a bagel in half, and at the slice you get the shape of a washer, the kind you fit around a nut and bolt. (An annulus, to use the trade term.) Slice it perpendicular to that, the way you’d cut it if you’re one of those people who eats half doughnuts to the amazement of the rest of us, and at the slice you get two detached circles. If you start from any point on the torus shape you can go in one direction and make a circle that loops around the doughnut’s central hole. You can go the perpendicular direction and make a circle that brushes up against but doesn’t go around the central hole. There’s some neat topology in it.

There’s also video games in it. The topology of this is just like old-fashioned video games where if you go off the edge of the screen to the right you come back around on the left, and if you go off the top you come back from the bottom. (And if you go off to the left you come back around the right, and off the bottom you come back to the top.) To go from the flat screen to the surface of a doughnut requires imagining some stretching and scrunching up of the surface, but that’s all right. (OK, in an old video game it was a kind-of flat screen.) We can imagine a nice flexible screen that just behaves.

This is a common trick to deal with boundaries. (I first wrote “to avoid having to deal with boundaries”. But this is dealing with them, by a method that often makes sense.) You just make each boundary match up with a logical other boundary. It’s not just useful in video games. Often we’ll want to study some phenomenon where the current state of things depends on the immediate neighborhood, but it’s hard to say what a logical boundary ought to be. This particularly comes up if we want to model an infinitely large surface without dealing with infinitely large things. The trick will turn up a lot in numerical simulations for that reason. (In that case, we’re in truth working with a numerical approximation of T, but that’ll be close enough.)
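The wraparound rule is nothing more than modular arithmetic. Here is a minimal sketch in Python; the screen dimensions are made up for illustration:

```python
def wrap(x, y, width=80, height=60):
    """Torus-style wraparound on a width-by-height screen:
    walk off the right edge and you re-enter on the left,
    walk off the top and you re-enter from the bottom."""
    return x % width, y % height

# Three steps right from x = 78 carries you past the edge at 80 ...
print(wrap(81, 10))   # ... and back around to (1, 10)
print(wrap(-1, -1))   # off the lower-left corner: (79, 59)
```

Python's `%` operator handles the negative direction for us, which is exactly the behavior a periodic boundary wants.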

Tn, meanwhile, is a vector of things, each of which is a point on a torus. It’s akin to Rn or S2 x n. They’re ordered sets of things that are themselves things. There can be as many as you like. n, here, is whatever positive whole number you need.

You might wonder how big the doughnut is. When we talked about the surface of the sphere, S2, or the surface and interior, B3, we figured on a sphere with radius of 1 unless we heard otherwise. Toruses would seem to have two parameters. There’s how big the outer diameter is and how big the inner diameter is. Which do we pick?

We don’t actually care. It’s much the way we can talk about a point on the surface of a planet by the latitude and longitude of the point, and never care about how big the planet is. We can describe a point on the surface of the torus without needing to refer to how big the whole shape is or how big the hole in the middle is. A popular scheme to describe points is one that looks a lot like latitude and longitude.

Imagine the torus sitting as flat as it gets on the table. Pick a point that you find interesting.

We use some reference point that’s as good as an equator and a prime meridian. One coordinate is the angle you make going horizontally, possibly around the hole in the middle, from the reference point to the point we’re interested in. The other coordinate is the angle you make vertically, going in a loop that doesn’t go around the hole in the middle, from the reference point to the point we’re interested in. The reference point has coordinates 0, 0, as it must. If this sounds confusing it’s because I’m not using a picture. I thought making some pictures would be too much work. I’m a fool. But if you think of real torus-shaped objects it’ll come to you.

In this scheme the coordinates are both angles. Normal people would measure that in degrees, from 0 to 360, or maybe from -180 to 180. Mathematicians would measure as radians, from 0 to 2π, or from -π to +π. Whatever it is, it’s the same as the coordinates of a point on the edge of the circle, what we called S1 a few essays back. So it’s fair to say you can think of T as S1 x S1, an ordered set of points on circles.
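To make the two-angle scheme concrete, here is a sketch mapping the coordinates onto a point in space. The sizes R (distance from the hole's center to the tube's center) and r (the tube's radius) are made-up values; as said above, the coordinates themselves don't depend on them:

```python
from math import cos, sin, pi

def torus_point(theta, phi, R=2.0, r=1.0):
    """A point on the torus from its two angle coordinates.
    theta: angle going around the central hole (the 'longitude').
    phi:   angle going around the tube itself (the 'latitude').
    R, r:  assumed overall size and tube radius (illustrative only)."""
    x = (R + r * cos(phi)) * cos(theta)
    y = (R + r * cos(phi)) * sin(theta)
    z = r * sin(phi)
    return (x, y, z)

# The reference point (0, 0) sits on the outermost equator:
print(torus_point(0, 0))   # (3.0, 0.0, 0.0)
```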

I’ve written of these toruses as three-dimensional things. Well, two-dimensional surfaces wrapped up to suggest three-dimensional objects. You don’t have to stick with these dimensions if you don’t want or if your problem needs something else. You can make a torus that’s a three-dimensional shape in four dimensions. For me that’s easiest to imagine as a cube where the left edge and the right edge loop back and meet up, the lower and the upper edges meet up, and the front and the back edges meet up. This works well to model an infinitely large space with a nice and small block.

I like to think I can imagine a four-dimensional doughnut where every cross-section is a sphere. I may be kidding myself. There could also be a five-dimensional torus and you’re on your own working that out, or working out what to do with it.

I’m not sure there is a common standard notation for that, though. Probably the mathematician wanting to make clear she’s working with a torus in four dimensions just says so in text, and trusts that the context of her mathematics makes it clear this is no ordinary torus.

I’ve also written of these toruses as circular, as rounded shapes. That’s the most familiar torus. It’s a doughnut shape, or an O-ring shape, or an inner tube’s shape. It’s the shape you produce by taking a circle and looping it around an axis not on the ring. That’s common and that’s usually all we need.

But if you need some other torus, produced by rotating some other shape around an axis not inside it, go ahead. You’ll need to make clear what that original shape, the generator, is. You’ve seen examples of this in, for example, the washers that fit around nuts and bolts. They’re typically rectangles in cross-section. Or you might have seen that image of someone who fit together a couple dozen iMac boxes to make a giant wheel. I don’t know why you would need this, but it’s your problem, not mine. If these shapes are useful for your work, by all means, use them.

I’m not sure there is a standard notation for that sort of shape. My hunch is to say you’d define your generating shape and give it a name such as A or D. Then name the torus based on that as T(A) or T(D). But I would recommend spelling it out in text before you start using symbols like this.

• #### howardat58 5:16 pm on Thursday, 14 January, 2016

And now for the Klein bottle!


• #### Joseph Nebus 3:52 am on Saturday, 16 January, 2016

I’ve wanted to do that but it’s so hard to say what to keep a Klein bottle in.


T superscript n is a standard notation for the n-dimensional torus. Like the post.


• #### Joseph Nebus 3:54 am on Saturday, 16 January, 2016

Thank you, I’m glad to have an independent source saying so.


## Reading the Comics, December 5, 2015: Awkward Break Edition

I confess I’m dissatisfied with this batch of Reading the Comics posts. I like having something like six to eight comics for one of these roundups. But there was this small flood of mathematically-themed comics on the 6th of December. I could either make do with a slightly short edition, or have an overstuffed edition. I suppose it’s possible to split one day’s comics across two Reading the Comics posts, but that’s crazy talk. So, a short edition today.

Jef Mallett’s Frazz for the 4th of December was part of a series in which Caulfield resists learning about reciprocals. The 4th offers a fair example of the story. At heart the joke is just the student-resisting-class, or student-resisting-story-problems. It certainly reflects a lack of motivation to learn what they are.

We use reciprocals most often to write division problems as multiplication. “a ÷ b” is the same as “a times the reciprocal of b”. But where do we get the reciprocal of b from? … Well, we can say it’s the multiplicative inverse of b. That is, it’s whatever number you have to multiply ‘b’ by in order to get ‘1’. But we’re almost surely going to find that by taking 1 and dividing it by b. So we’ve swapped out one division problem for a slightly different one. This doesn’t seem to be getting us anywhere.

But we have gotten a new idea. If we can define the multiplication of things, we might be able to get division for almost free. Could we divide one matrix by another? We can certainly multiply a matrix by the inverse of another. (There are complications at work here. We’ll save them for another time.) A lot of sets allow us to define things that make sense as addition and multiplication. And if we can define a complicated operation in terms of addition and multiplication … If we follow this path, we get to do things like define the cosine of a matrix. Then we just have to figure out why we’d want to have a cosine of a matrix.

There’s a simpler practical use of reciprocals. This relates to numerical mathematics, computer work. Computer chips do addition (and subtraction) really fast. They do multiplication a little slower. They do division a lot slower. Division is harder than multiplication, as anyone who’s done both knows. However, dividing by (say) 4 is the same thing as multiplying by 0.25. So if you know you need to divide by a number a lot, then it might make for a faster program to change division into multiplication by a reciprocal. You have to work out the reciprocal, but if you only have to do that once instead of many times over, this might make for faster code. Reciprocals are one of the tools we can use to change a mathematical process into something faster.
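The trade looks something like this sketch, with the divisor chosen to be a power of two so that its reciprocal is exact in floating point; real compilers handle the general case with considerably more care:

```python
values = [10.0, 25.0, 4.0, 19.0]
b = 4.0

# Naive version: one (slow) division per element.
divided = [v / b for v in values]

# Reciprocal version: divide once, then (fast) multiplications.
inv_b = 1.0 / b
multiplied = [v * inv_b for v in values]

print(divided == multiplied)   # True; 1/4 is exactly 0.25
```

With a divisor like 3, whose reciprocal is not exactly representable, the two versions can differ in the last bit, which is part of why this rewrite is best left to the compiler.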

(In practice, you should never do this. You have a compiler that does this, and you should let it do its work. But it’s enlightening to know these are the sorts of things your compiler is looking for when it turns your code into something the computer does. And looking for ways to do the same work in less time is a noble side of mathematics.)

Charles Schulz’s Peanuts for the 4th of December (originally from 1968, on the same day) sees Peppermint Patty’s education crash against a word problem. It’s another problem in motivating a student to do a word problem. I admit when I was a kid I’d have been enchanted by this puzzle. But I was a weird one.

Dave Coverly’s Speed Bump for the 4th of December is a mathematics-symbols joke as applied to toast. I think you could probably actually sell those. At least the greater-than and the less-than signs. The approximately-equal-to signs would be hard to use. And people would think they were for bacon anyway.

Ruben Bolling’s Super-Fun-Pak Comix for the 4th of December showcases Young Albert Einstein. That counts as mathematical content, doesn’t it? The strip does make me wonder if they’re still selling music CDs and other stuff for infant or even prenatal development. I’m skeptical that they ever did any good, but it isn’t a field I’ve studied.

Bill Whitehead’s Free Range for the 5th of December uses a blackboard full of mathematical and semi-mathematical symbols to denote “stuff too complicated to understand”. The symbols don’t parse as anything. It is authentic to mathematical work to sometimes skip writing all the details of a thing and write in instead a few words describing it. Or to put in an abbreviation for the thing. That often gets circled or boxed or in some way marked off. That keeps us from later on mistaking, say, “MUB” as the product of M and U and B, whatever that would mean. Then we just have to remember we meant “minimum upper bound” by that.

## Reading the Comics, July 29, 2015: Not Entirely Reruns Edition

Zach Weinersmith’s Saturday Morning Breakfast Cereal (July 25) gets its scheduled appearance here with a properly formed Venn Diagram joke. I’m unqualified to speak for rap musicians. When mathematicians speak of something being “for reals” they mean they’re speaking about a variable that might be any of the real numbers. This is as opposed to limiting the variable to being some rational or irrational number, or being a whole number. It’s also as opposed to letting the variable be some complex-valued number, or some more exotic kind of number. It’s a way of saying what kind of thing we want to find true statements about.

I don’t know when the Saturday Morning Breakfast Cereal first ran, but I know I’ve seen it appear in my Twitter feed. I believe all the Gocomics.com postings of this strip are reruns, but I haven’t read the strip long enough to say.

Steve Sicula’s Home And Away (July 26) is built on the joke of kids wise to mathematics during summer vacation. I don’t think this is a rerun, although we’ve seen the joke this summer before.

Daniel Beyer’s Offbeat Comics for the 27th of July, 2015.

Daniel Beyer’s Offbeat Comics (July 27) depicts an angel with a square halo because “I was good2.” The association between squaring a number and squares goes back a long time. Well, it’s right there in the name, isn’t it? Florian Cajori’s A History Of Mathematical Notations cites the term “latus” and the abbreviation “l” to represent the side of a square being used by the Roman surveyor Junius Nipsus in the second century; for centuries this would be as good a term as anyone had for the thing to be calculated. (Res, meaning “thing”, was also popular.) Once you’ve taken the idea of calculating based on the length of a square, the jump to “square” for “length times itself” seems like a tiny one. But Cajori doesn’t seem to have examples of that being written until the 16th century.

The square of the quantity you’re interested in might be written q, for quadratus. The cube would be c, for cubus. The fourth power would be b or bq, for biquadratus, and so on. This is tolerable if you only have to work with a single unknown quantity, but the notation turns into gibberish the moment you want two variables in the mix. So it collapsed in the 17th century, replaced by the familiar x2 and x3 and so on. Many authors developed notations close to this: James Hume would write xii or xiii; Pierre Hérigone x2 or x3, all in one line. René Descartes would write x2 or x3 or so, and many, many followed him. Still, quite a few people — including René Descartes, Isaac Newton, and even as late a figure as Carl Gauss, in the early 19th century — would resist “x2”. They’d prefer “xx”. Gauss defended this on the grounds that “x2” takes up just as much space as “xx” and so fails the biggest point of having notation.

Corey Pandolph’s Toby, Robot Satan (July 27, rerun) uses sudoku as an example of the logic and reasoning problems that one would expect a robot should be able to do. It is weird to encounter one that’s helpless before them.

Cory Thomas’s Watch Your Head (July 27, rerun from 2007) mentions “Chebyshev grids” and “infinite boundaries” as things someone doing mathematics on the computer would do. And it does so correctly. Differential equations describe how things change on some domain over space and time. They can be very hard to solve exactly, but can be put on the computer very well. For this, we pick a representative set of points which we call a mesh. And we find an approximate representation of the original differential question, which we call a discretization or a difference equation. We can then solve this difference equation on the mesh, and if we’ve done our work right, this approximation will let us get a good estimate for the solution to the original problem over the whole original domain.

A Chebyshev grid is a particular arrangement of mesh points. It’s not uniform; the points clump together, spaced more closely near the edges of the domain. This is useful if you have reason to expect that the boundaries are more interesting than the middle of the domain. There’s no sense wasting good computing power calculating boring stuff. The mesh is named for Pafnuty Chebyshev, a 19th Century Russian mathematician whose name is all over mathematics. Unfortunately since he was a 19th Century Russian mathematician, his name is transcribed into English all sorts of ways. Chebyshev seems to be most common today, though Tchebychev used to be quite popular, which is why polynomials of his might be abbreviated as T. There are many alternatives.
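One common recipe for a Chebyshev grid on the interval from -1 to 1 (the Chebyshev-Gauss-Lobatto points, to pick one of the several names in use) puts the points at the cosines of evenly spaced angles. A quick sketch shows the clumping:

```python
from math import cos, pi

def chebyshev_points(n):
    """The n + 1 Chebyshev (Gauss-Lobatto) points on [-1, 1]:
    x_j = cos(j * pi / n). They crowd toward the endpoints."""
    return [cos(j * pi / n) for j in range(n + 1)]

pts = chebyshev_points(8)
# Gaps shrink toward the endpoints: the mesh crowds where
# (we hope) the interesting behavior is.
end_gap = pts[0] - pts[1]    # spacing near the right endpoint
mid_gap = pts[3] - pts[4]    # spacing near the middle
print(end_gap < mid_gap)     # True
```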

Ah, but how do you represent infinite boundaries with the finitely many points of any mesh you could actually compute with? There are many approaches. One is to just draw a really wide mesh and trust that all the action is happening near the center so omitting the very farthest things doesn’t hurt too much. Or you might figure what the average of things far away is, and make a finite boundary that has whatever that value is. Another approach is to make the boundaries repeating: go far enough to the right and you loop back around to the left, go far enough up and you loop back around to down. Another approach is to create a mesh that is bundled up tight around the center, but that has points which do represent going off very, very far, maybe in principle infinitely far away. You’re allowed to create meshes that don’t space points uniformly, and that even move points as you compute. That’s harder work, but it’s legitimate numerical mathematics.
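That last approach, bunching the mesh up while letting some points stand for places enormously far away, can be sketched with a stretching map. The particular map below, x = t/(1 - t²), is just one illustrative choice among many:

```python
# Map a uniform mesh on the open interval (-1, 1) onto the whole
# real line. Points near t = +1 or t = -1 stand for enormously
# distant x; t exactly at the ends would stand for infinity.
def stretch(t):
    return t / (1.0 - t * t)

mesh = [i / 100 for i in range(-99, 100)]   # uniformly spaced
real_line = [stretch(t) for t in mesh]
print(real_line[0], real_line[-1])   # roughly -49.7 and 49.7,
                                     # growing without bound as t -> ±1
```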

So, the mathematical work being described here is — so far as described — legitimate. I’m not competent to speak about the monkey side of the research.

Greg Evans’s Luann Againn (July 29; rerun from July 29, 1987) name-drops the Law of Averages. There are actually multiple Laws of Averages, with slightly different assumptions and implications, but they all come to about the same meaning. You can expect that if some experiment is run repeatedly, the average value of the experiments will be close to the true value of whatever you’re measuring. An important step in proving this law was done by Pafnuty Chebyshev.

• #### sheldonk2014 1:26 am on Thursday, 30 July, 2015

Bugs Bunny was 75 yesterday
Bob Clampett was the developer, also Tweety and Porky Pig
Mel Blanc was the voice


• #### Joseph Nebus 5:47 am on Sunday, 2 August, 2015

That’s … actually kind of complicated. There are a lot of characters who are more or less Bugs Bunny, with the character really coming together around 1938. It’s a good lesson for people who want to study history, really. You have to figure out what you mean by ‘the first Bugs Bunny cartoon’ and defend your choice to say anything about his origins.


## A Summer 2015 Mathematics A to Z Roundup

Since I’ve run out of letters there’s little dignified to do except end the Summer 2015 Mathematics A to Z. I’m still organizing my thoughts about the experience. I’m quite glad to have done it, though.

For the sake of good organization, here’s the set of pages that this project’s seen created:

## z-transform.

The z-transform comes to us from signal processing. The signal we take to be a sequence of numbers, all representing something sampled at uniformly spaced times. The temperature at noon. The power being used, second-by-second. The number of customers in the store, once a month. Anything. The sequence of numbers we take to stretch back into the infinitely great past, and to stretch forward into the infinitely distant future. If it doesn’t, then we pad the sequence with zeroes, or some other safe number that we know means “nothing”. (That’s another classic mathematician’s trick.)

It’s convenient to have a name for this sequence. “a” is a good one. The different sampled values are denoted by an index. a0 represents whatever value we have at the “start” of the sample. That might represent the present. That might represent where sampling began. That might represent just some convenient reference point. It’s the equivalent of mileage marker zero; we have to have something be the start.

a1, a2, a3, and so on are the first, second, third, and so on samples after the reference start. a-1, a-2, a-3, and so on are the first, second, third, and so on samples from before the reference start. That might be the last couple of values before the present.

So for example, suppose the temperatures the last several days were 77, 81, 84, 82, 78. Then we would probably represent this as a-4 = 77, a-3 = 81, a-2 = 84, a-1 = 82, a0 = 78. We’ll hope this is Fahrenheit or that we are remotely sensing a temperature.

The z-transform of a sequence of numbers is something that looks a lot like a polynomial, based on these numbers. For this five-day temperature sequence the z-transform would be the polynomial $77 z^4 + 81 z^3 + 84 z^2 + 82 z^1 + 78 z^0$. (z1 is the same as z. z0 is the same as the number “1”. I wrote it this way to make the pattern more clear.)

I would not be surprised if you protested that this doesn’t merely look like a polynomial but actually is one. You’re right, of course, for this set, where all our samples are from negative (and zero) indices. If we had positive indices then we’d lose the right to call the transform a polynomial. Suppose we trust our weather forecaster completely, and add in a1 = 83 and a2 = 76. Then the z-transform for this set of data would be $77 z^4 + 81 z^3 + 84 z^2 + 82 z^1 + 78 z^0 + 83 \left(\frac{1}{z}\right)^1 + 76 \left(\frac{1}{z}\right)^2$. You’d probably agree that’s not a polynomial, although it looks a lot like one.
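A computer would hold the transform as something like this sketch, using the indexing convention above, where sample a_n contributes a_n times z to the -n power, so past samples get positive powers of z:

```python
def z_transform(samples):
    """Return the z-transform of a finite set of samples as a
    function of z. `samples` maps each index n to the value a_n."""
    def X(z):
        return sum(a * z ** (-n) for n, a in samples.items())
    return X

# The temperature example: five past days plus the two forecast days.
temps = {-4: 77, -3: 81, -2: 84, -1: 82, 0: 78, 1: 83, 2: 76}
X = z_transform(temps)
print(X(1.0))   # at z = 1 every term is just a_n, so this is the sum: 561.0
```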

The use of z for these polynomials is basically arbitrary. The main reason to use z instead of x is that we can learn interesting things if we imagine letting z be a complex-valued number. And z carries connotations of “a possibly complex-valued number”, especially if it’s used in ways that suggest we aren’t looking at coordinates in space. It’s not that there’s anything in the symbol x that refuses the possibility of it being complex-valued. It’s just that z appears so often in the study of complex-valued numbers that it reminds a mathematician to think of them.

A sound question you might have is: why do this? And there’s not much advantage in going from a list of temperatures “77, 81, 84, 82, 78, 83, 76” over to a polynomial-like expression $77 z^4 + 81 z^3 + 84 z^2 + 82 z^1 + 78 z^0 + 83 \left(\frac{1}{z}\right)^1 + 76 \left(\frac{1}{z}\right)^2$.

Where this starts to get useful is when we have an infinitely long sequence of numbers to work with. Yes, it does too. It will often turn out that an interesting sequence transforms into a polynomial that itself is equivalent to some easy-to-work-with function. My little temperature example there won’t do it, no. But consider the sequence that’s zero for all negative indices, and 1 for the zero index and all positive indices. This gives us the polynomial-like structure $\cdots + 0z^2 + 0z^1 + 1 + 1\left(\frac{1}{z}\right)^1 + 1\left(\frac{1}{z}\right)^2 + 1\left(\frac{1}{z}\right)^3 + 1\left(\frac{1}{z}\right)^4 + \cdots$. And that turns out to be the same as $1 \div \left(1 - \left(\frac{1}{z}\right)\right)$. That’s much shorter to write down, at least.
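You can check that equivalence numerically, at least for a z large enough that the powers of 1/z die away; z = 2 here is an arbitrary test value:

```python
z = 2.0
# Partial sum of 1 + 1/z + 1/z^2 + ... out to sixty terms.
partial = sum((1 / z) ** k for k in range(60))
closed_form = 1 / (1 - 1 / z)
print(abs(partial - closed_form) < 1e-12)   # True: both are 2
```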

Probably you’ll grant that, but still wonder what the point of doing that is. Remember that we started by thinking of signal processing. A processed signal is a matter of transforming your initial signal. By this we mean multiplying your original signal by something, or adding something to it. For example, suppose we want a five-day running average temperature. This we can find by taking one-fifth today’s temperature, a0, and adding to that one-fifth of yesterday’s temperature, a-1, and one-fifth of the day before’s temperature a-2, and one-fifth a-3, and one-fifth a-4.
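That averaging is about the simplest processing there is; as a sketch:

```python
def five_day_average(samples):
    """One-fifth of each of today's and the previous four days'
    values: the running-average filter described above."""
    return sum(samples[-5:]) / 5

temps = [77, 81, 84, 82, 78]   # a-4 through a0
print(five_day_average(temps))   # 80.4
```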

The effect of processing a signal is equivalent to manipulating its z-transform. By studying properties of the z-transform, such as where its values are zero or where they are imaginary or where they are undefined, we learn things about what the processing is like. We can tell whether the processing is stable — does it keep a small error in the original signal small, or does it magnify it? Does it serve to amplify parts of the signal and not others? Does it dampen unwanted parts of the signal while keeping the main intact?

We can understand how data will be changed by understanding the z-transform of the way we manipulate it. That z-transform turns a signal-processing idea into a complex-valued function. And we have a lot of tools for studying complex-valued functions. So we become able to say a lot about the processing. And that is what the z-transform gets us.

• #### sheldonk2014 4:45 pm on Wednesday, 22 July, 2015

Do you go to that pinball place in New Jersey


• #### Joseph Nebus 1:46 am on Thursday, 23 July, 2015

When I’m able to, yes! Fortunately work gives me occasional chances to revisit my ancestral homeland and from there it’s a quite reasonable drive to Asbury Park and the Silverball Museum. It’s a great spot and I recommend it highly.

There’s apparently also a retro arcade in Red Bank, with a dozen or so pinball machines and a fair number of old video games. I’ve not been there yet, though.


• #### howardat58 2:18 am on Thursday, 23 July, 2015

Here is a bit more.

z is used in dealing with recurrence relations and their active form, with input as well, in the form of a “z transfer function”:
a(n) is the input at time n, u(n) is the output at time n, these can be viewed as sequences
u(n+1) = u(n) + a(n+1) represents the integral/accumulation/sum of series for the input process
z is considered as an operator which moves the whole sequence back one step,
Applied to the sequence equation shown you get u(n+1) = zu(n),
and the equation becomes
zu(n) = u(n) + za(n)
Now since everything has (n) we don’t need it, and get
zu = u + za
Solving for u gives
u = z/(z-1)a
which describes the behaviour of the output for a given sequence of inputs
z/(z-1) is called the transfer function of the input/output system
and in this case of summation or integration the expression z/(z-1) represents the process of adding up the terms of the sequence.
One nice thing is that if you do all of this for the successive differences process
u(n+1) = a(n+1) – a(n)
you get the transfer function (z-1)/z, the discrete differentiation process.


• #### Joseph Nebus 2:11 pm on Saturday, 25 July, 2015

That’s a solid example of using these ideas. May I bump it up to a main post in the next couple days so that (hopefully) more people catch it?


## Well-Posed Problem.

This is another mathematical term almost explained by what the words mean in English. Probably you’d guess a well-posed problem to be a question whose answer you can successfully find. This also implies that there is an answer, and that it can be found by some method other than guessing luckily.

Mathematicians demand three things of a problem to call it “well-posed”. The first is that a solution exists. The second is that a solution has to be unique. It’s imaginable there might be several answers that answer a problem. In that case we weren’t specific enough about what we’re looking for. Or we should have been looking for a set of answers instead of a single answer.

The third requirement takes some time to understand. It’s that the solution has to vary continuously with the initial conditions. That is, suppose we started with a slightly different problem. If the answer would look about the same, then the problem was well-posed to begin with. Suppose we’re looking at the problem of how a block of ice gets melted by a heater set in its center. The way that melts won’t change much if the heater is a little bit hotter, or if it’s moved a little bit off center. This heating problem is well-posed.

There are problems that don’t have this continuous variation, though. Typically these are “inverse problems”. That is, they’re problems in which you look at the outcome of something and try to say what caused it. That would be looking at the puddle of melted water and the heater and trying to say what the original block of ice looked like. There are a lot of blocks of ice that all look about the same once melted, and there’s no way of telling which was the one you started with.

You might think of these conditions as “there’s an answer, there’s only one answer, and you can find it”. That’s good enough as a memory aid, but it isn’t quite so. A problem’s solution might have this continuous variation, but still be “numerically unstable”. This is a difficulty you can run across when you try doing calculations on a computer.

You know the thing where on a calculator you type in 1 / 3 and get back 0.333333? And you multiply that by three and get 0.999999 instead of exactly 1? That’s the thing that underlies numerical instability. We want to work with numbers, but the calculator or computer will let us work with only an approximation to them. 0.333333 is close to 1/3, but isn’t exactly that.
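Python’s decimal module can play the part of that six-digit calculator exactly; here’s a small sketch of the effect:

```python
from decimal import Decimal, getcontext

getcontext().prec = 6            # behave like a six-digit calculator

third = Decimal(1) / Decimal(3)
print(third)                     # 0.333333
print(third * 3)                 # 0.999999, not 1
```

Binary floating point has the same character, just with powers of two instead of ten, which is why sums of 0.1 drift in ordinary computer arithmetic too.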

For many calculations the difference doesn’t matter. 0.999999 is really quite close to 1. If you lost 0.000001 parts of every dollar you earned there’s a fine chance you’d never even notice. But in some calculations, numerically unstable ones, that difference matters. It gets magnified until the error created by the difference between the number you want and the number you can calculate with is too big to ignore. In that case we call the calculation we’re doing “ill-conditioned”.

And it’s possible for a problem to be well-posed but ill-conditioned. This is annoying and is why numerical mathematicians earn the big money, or will tell you they should. Trying to calculate the answer will be so likely to give something meaningless that we can’t trust the work that’s done. But often it’s possible to rework a calculation into something equivalent but well-conditioned. And a well-posed, well-conditioned problem is great. Not only can we find its solution, but we can usually have a computer do the calculations, and that’s a great breakthrough.

• #### rennydiokno2015 8:11 am on Thursday, 16 July, 2015

Reblogged this on My Blog News.


## Step.

On occasion a friend or relative who’s got schoolkids asks me how horrified I am by some bit of Common Core mathematics. This is a good chance for me to disappoint the friend or relative. Usually I’m just sincerely not horrified. Much of what raises horror is students being asked to estimate and approximate answers. This is instead of calculating the answer directly. But I like estimation and approximation. If I want an exact answer I’ll do better to use a calculator. What I need is assurance the thing I’m calculating can sensibly be the thing I want to know. Nearly all my feats of mental arithmetic amount to making an estimate. If I must I improve it until someone’s impressed.

The other horror-raising examples I get amount to “look at how many steps it takes to do this simple problem!” The ones that cross my desk are usually subtraction problems. Someone’s offended the student is told to work out 107 minus 18 (say) by counting by ones from 18 up to 20, then by tens from 20 up to 100, and then by ones again up to 107. And this when they could just write one number above another and do some borrowing and get 89 right away, no steps needed. Assuring my acquaintance that the other method is really just the way you might count change, and that I do subtraction that way much of the time, doesn’t change minds. (More often I do that to double-check my answer. This raises the question of why I don’t do it that way the first time.) Though it does make the acquaintance conclude I’m some crazy person with no idea how to teach kids.
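The counting-change method is concrete enough to spell out. This sketch is my own rendering of it, not any curriculum’s:

```python
def count_up(small, large):
    """Subtract by counting up, the way you'd count change:
    ones to the next round ten, then whole tens, then ones
    the rest of the way. Returns the difference."""
    total, x = 0, small
    while x % 10 != 0 and x < large:   # ones up to a round ten
        x += 1
        total += 1
    while x + 10 <= large:             # whole tens
        x += 10
        total += 10
    while x < large:                   # leftover ones
        x += 1
        total += 1
    return total

print(count_up(18, 107))   # 89, the same as borrowing gets you
```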

That conclusion is probably fair. I’ve never taught elementary school students, and haven’t any training for it. I’ve only taught college students. For that my entire training consisted of a single one-credit course my first semester as a Teaching Assistant, plus whatever I happened to pick up while TAing for professors who wanted me to sit in on lecture. From the first I learned there is absolutely no point to saying anything while I face the chalkboard because it will be unheard except by the board, which has already been through this class forty times. From the second I learned to toss hard candies as reward to anyone who would say anything, anything, in class. Both are timeless pedagogical truths.

But the worry about the number of steps it takes to do some arithmetic calculation stays with me. After all, what is a step? How much work is it? How hard is a step?

I don’t think there is a concrete measure of hardness. I’m not sure there could be. If I needed to, I’d work out 107 minus 18 by noticing it’s just about 110 minus 20, so it’s got to be about 90, and a 7 minus 8 has to end in a 9 so the answer must be 89. How many steps was that? I guess there are maybe three thoughts involved there. But I don’t do that, at least not deliberately, when I look at the problem. 89 just appears, and if I stay interested in the question, the reasons why that’s right follow in short order. So how many steps did I take? Three? One?

On the other hand, I know that in elementary school I would have had to work it out by looking at 7 minus 8. And then I’d need to borrow from the tens column. And oh dear there’s a 0 to the left of the 7 so I have to borrow from the hundreds and … That’s the procedure as it was taught back then. Now, I liked that. I understood it. And I was taught with appeals to breaking dollars into dimes and pennies, which worked for my imagination. But it’s obviously a bunch of steps. How many? I’m not sure; probably around ten or so. And, if we’re being honest, borrowing from a zero in the tens column is a deeply weird thing to do. I can understand people freezing up rather than do that.

Similarly, I know that if I needed to differentiate the logarithm of the cosine of x, I would have the answer in a flash. It’d be at most one step. If I were still in high school, in my calculus class, I’d need longer. I’d struggle through the chain rule and some simplifications after that. Call it maybe four or five steps. If I were in elementary school I’d need infinitely many steps. I couldn’t even understand the problem except in the most vague, metaphoric way.
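For the record, the chain-rule calculation compresses to a single line (reading "the logarithm" as the natural logarithm):

```latex
\frac{d}{dx}\,\ln(\cos x) \;=\; \frac{1}{\cos x}\cdot\frac{d}{dx}\,\cos x \;=\; \frac{-\sin x}{\cos x} \;=\; -\tan x
```

To the experienced eye that whole chain is the "flash"; to the calculus student each equals sign is its own step.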

This leads me to my suggestion for what a “step” is, at least for problems you work out by hand. (Numerical computing has a more rigorous definition of a step; that’s when you do one of the numerical processing operations.) A step is “the most work you can do in your head without a significant chance of making a mistake”. I think that’s a definition that clarifies the problem of counting steps. It will be different for different people. It will be different for the same person, depending on how experienced she is. The steps a newcomer to a subject can take are smaller than the ones an expert can. And it’s not just that the newcomer takes more steps to get to the same conclusion than the expert does. The expert might imagine the problem breaks down into different steps from the ones a newcomer can do. Possibly the most important skill a teacher has is being able to work out what the steps the newcomer can take are. These will not always be what the expert thinks the smaller steps would be.

But what to do with problem-solving approaches that require lots of steps? And here I recommend one of the wisest pieces of advice I’ve ever run across. It’s from the 1954 Printer 1 & C United States Navy Training Course manual, NavPers 10458. I apologize if I’m citing it wrong, but I hope people can follow that to the exact document. I have it because I’m interested in Linotype operation. From page 308, the section “Don’t Overlook Instructions” in Chapter 7:

When starting on a new piece of copy, or “take” as it is called, be sure to read all instructions, such as the style and size of type, the measure to be set, whether it is to be leaded, indented, and so on.

Then go slowly. Try to develop even, rhythmic strokes, rather than quick, sporadic motions. Strive for accuracy rather than speed. Speed will come with practice.

As with Linotype operations, so it is with arithmetic. Be certain you are doing what you mean to do, and strive to do it accurately. I don’t know how many steps you need, but you probably won’t get a wrong answer if you take more than the minimum number of steps. If you take fewer steps than you need the results will be wretched. Speed will come with practice.

• #### howardat58 4:00 pm on Monday, 6 July, 2015

The Common Core idea is that kids will “discover” ways of doing subtraction that are simple to understand, and the one you show is an example. What has happened is that the people who write the curricula have managed to turn this into “this is how we do subtraction”, so it becomes just another method, which can equally well be applied without knowing why it works. Pity really.

• #### Joseph Nebus 5:41 pm on Tuesday, 7 July, 2015

I hope it doesn’t disappoint you too much that I can sympathize with both sides in this dispute. That is, I see the value of having people discover methods to do something. But I also appreciate that sometimes you just need a process you can follow.

Generally, I’m fond of offering a couple techniques to doing anything. I tend to think several approaches makes it easier to understand what the others are doing. And sometimes people will struggle with one approach while really getting another, and they should be able to do problems the way they do best. Of course that has the drawback that someone who doesn’t know how to pick an approach can freeze up, unsure of what the “right” approach is and not confident in just taking any one they can do.

• #### sarcasticgoat 4:00 pm on Monday, 6 July, 2015

Is there such a thing as mathematical-dyslexia? Because I have it, if it exists.

• #### Joseph Nebus 5:33 pm on Tuesday, 7 July, 2015

There is debate about whether such a “dyscalculia” exists. My understanding is that it’s controversial whether it’s a consistent describable common problem, though. I mean, it’s easy to imagine people who are not able to consistently read symbols in the correct order.

But mathematics isn’t all about reading symbols. For example, a geometric proof might show that the area of a particular crescent-moon-style shape is the same as the area of this square, constructed from the crescent by straightedge and compass. Grant that someone can’t consistently tell the difference between 2038 and 2308 in an arithmetic problem; how would that even touch the mathematical reasoning involved in that?

And would arithmetic or geometric reasoning have anything to do with (say) thinking of what the eigenfunctions for a particular differential equations operator might be, or how they might resemble or differ from one another?

There might be. People are complicated things and it seems plausible that there are kinds of reasoning folks consistently can’t do. But isolating that as a consistent, describable, common problem that isn’t just related to unfamiliarity or inexperience seems hard to do. I suppose this is why they pay the experimental psychologists the big money.

• #### sarcasticgoat 5:46 pm on Tuesday, 7 July, 2015

I can read numbers properly, but simple sums like 78 minus 33 for example just floor me, and it takes me a very long time, maybe 30 to 45 seconds, up to a minute, to be able to do it!
It’s the bane of my life!

• #### Joseph Nebus 5:45 am on Friday, 10 July, 2015

Do you need to do it any faster? Getting the right answer — or an approximately right answer — is the important thing. Getting it fast is nice but not essential.

Do you ever try estimating or breaking up problems? For example, 78 minus 33 looks lousy to me because that 33 is unpleasant. 30 would be much nicer. If you took three off the 78 and three off the 33, though, you’d get the same difference. 78 minus three is 75, and 33 minus three is 30. So 78 minus 33 has to be the same number as 75 minus 30. And that’s an easier problem, since you just have to do 7 minus 3 and get 45.
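This trick, sometimes called the equal-additions or shifting method, is easy to state in code. A minimal sketch (my own naming, not any standard routine): shift both numbers down by the same amount so the number being subtracted becomes a round multiple of ten; the difference is untouched.

```python
def shift_subtract(a, b):
    """Subtract b from a by shifting both numbers down by the same
    amount, so b becomes a round multiple of ten.  The difference is
    unchanged because (a - k) - (b - k) = a - b."""
    k = b % 10          # how far b sits above a multiple of ten
    return (a - k) - (b - k)

print(shift_subtract(78, 33))   # 75 - 30 = 45
print(shift_subtract(107, 18))  # 99 - 10 = 89
```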

• #### sarcasticgoat 7:12 am on Friday, 10 July, 2015

I suppose I don’t need to be able to answer sums quickly, it’s just that I don’t know anyone else who has such ease with language and words to have such a hard time with numbers, hence the possibility of the existence of some kind of condition………….like how dyslexics CAN spell and read, it just takes them a lot of work.
:)

• #### Joseph Nebus 4:10 am on Tuesday, 14 July, 2015

It’s possible that there is a condition that makes calculation harder than it needs to be. But it might also be that you just never happened to find calculating fun enough that you wanted to do it regularly, so you still do it as an inexperienced person.

Are there things you like calculating, or logic puzzles that you enjoy doing?

• #### sarcasticgoat 8:53 am on Tuesday, 14 July, 2015

Does Tetris count?
Other than that, not really, I play Scrabble a lot, which has numbers involved, but is mostly word based.
I think you’re right about me being inexperienced at it, as soon as we were allowed to use calculators in high school, I pretty much gave up on the pure number stuff.

• #### Joseph Nebus 4:14 pm on Wednesday, 15 July, 2015

There is neat stuff to be said about Tetris, mathematically. The obvious thing is the study of tilings and symmetries: how many different ways can you cover a flat surface using only these blocks, or using some subset of them, or using the blocks in only certain directions? (That L-shaped figure gets a lot more difficult to use if you can’t rotate it to fit.) And there’s neat other puzzles; for example, suppose you dropped a bunch of Tetris pieces at random into the tube. Surely you’d get at least a handful of completed lines by luck; how many? How much empty space would you expect to leave behind? What are the longest stretches of empty spaces you’d expect to leave behind?

Granted, this doesn’t help you calculating much of anything. But they’re neat questions to ask and to try answering. (I admit I don’t have much of an answer for any of them, although I can imagine working out estimates by computer simulation.)
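Just to show the flavor of such a simulation, here is a deliberately crude Monte Carlo sketch in Python. It is my own toy model, nothing rigorous: only two piece shapes, no rotation, no sliding, no line clearing, and pieces land flat on the tallest column they cover. It estimates how many complete lines and how many buried holes random dropping produces.

```python
import random

WIDTH = 10
# Two piece footprints as (row, col) offsets: a 2x2 square and a flat 1x4 bar.
PIECES = {
    "O": [(0, 0), (0, 1), (1, 0), (1, 1)],
    "I": [(0, 0), (0, 1), (0, 2), (0, 3)],
}

def drop_pieces(n, rng):
    """Drop n random pieces into an infinitely tall well; report
    (completed_lines, buried_holes).  Pieces land on the tallest
    column they cover, with no rotation or line clearing."""
    filled = set()            # occupied (row, col) cells
    heights = [0] * WIDTH     # stack height of each column
    for _ in range(n):
        cells = PIECES[rng.choice(sorted(PIECES))]
        piece_width = max(c for _, c in cells) + 1
        col = rng.randrange(WIDTH - piece_width + 1)
        base = max(heights[col + c] for _, c in cells)
        for dr, dc in cells:
            filled.add((base + dr, col + dc))
            heights[col + dc] = max(heights[col + dc], base + dr + 1)
    top = max(heights)
    complete = sum(
        1 for row in range(top)
        if all((row, col) in filled for col in range(WIDTH))
    )
    holes = sum(heights) - len(filled)   # empty cells buried under the stack
    return complete, holes

lines, holes = drop_pieces(500, random.Random(42))
print(lines, holes)
```

Running it with more pieces, or different piece sets, gives rough estimates of the quantities in the questions above, though a serious answer would need a much more faithful model of the falling and clearing rules.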

• #### sarcasticgoat 4:25 pm on Wednesday, 15 July, 2015

That’s exactly the sort of thing about mathematics I like thinking about in depth; the theoretical stuff that doesn’t involve numbers and sums and stuff like that to the layperson.
One thing I learned from a show I watched a few years ago;
if you bounce a chrome ball off of a big ball of quartz, the vibrations that occur, mimic almost exactly, in graph form, the same vibrations that the number Pi has in graph form……….that blew my mind, and to my knowledge, no one knows why that happens. Amazing!
Pi is an irrational number with no real symmetry or order, yet it’s replicated in a huge number of natural phenomena; even London taxis tend towards following patterns that mimic Pi (something like that)……it’s just unreal!

• #### Michelle H 1:05 pm on Friday, 10 July, 2015

I read this post Wednesday via my phone (if WordPress doesn’t count mobile devices in your stats, I haven’t helped you much of late, I’m afraid!). I wanted to sit down at the computer to comment, and am finally here. The new math program rolled out as my children started school. I didn’t like it very much as it came in, and my husband (who works in geomatic surveys) complained that the new program wouldn’t give a solid foundation for a career in applied mathematics. I was working in an elementary school as the program rolled out. I guess I got to see school mathematics transformation through many perspectives, as parent, educator, and from my husband I saw a professional perspective.

I did like how the process of thinking about mathematics forced a change in the schools. As a kid, I was a bit of a self-learner, and I often thought about math in a different way than most of my teachers. I noticed this wasn’t always a good thing for me, as it sometimes landed me in a bit of conflict with a weaker teacher if my questions seemed too challenging. The new math offers a greater flexibility for differing processes, I think, and I would probably have loved it as a child. I did notice that the new program also caused considerable stress for teachers without strong math skills. In three schools over eight years I saw two teachers give it up and request a math specialist to take over their math classes, which I think resulted in a better experience for everyone. As a parent, I’m glad that this change has forced a review within our education systems and has required teachers to have stronger skills in their subject areas; teaching is not just about teaching, after all! One has to know the material well enough before sharing that information.

As a former employee in the education system, one of the greater struggles with the new math was how unwieldy it is for the large classrooms we have (usually 26 to 30 students). Distributing the manipulatives, for example, requires time and space that is simply not available. Storing the bins in a classroom that was at maximum capacity is another problem. We’d often move the kids throughout the school, using any available space we could find. The time we spent simply managing the lessons cut into learning time, and left us struggling to keep up. In those first few years as the process worked itself out, we always worried that we wouldn’t cover all the material in a given year. Most of the teachers adapted by borrowing time from other subjects whenever possible.

As for the professional perspective, my husband finds that new college grads are not mathematically prepared for his field, but the new staff he receives now were students in the old system. I’m not sure if it could possibly be any worse, as very few of them seem to be mentally engaged in their work processes. Relying on the computer system to step through their work, it isn’t always a given that they’ll notice illogical results in their data. That, I’m afraid, is not necessarily a problem that teachers or curriculums are responsible for solving.

• #### Joseph Nebus 4:42 am on Tuesday, 14 July, 2015

I want to thank you, so much, for such a long and detailed comment about this. As I say, it’s a level of mathematics teaching I’ve never done myself, but that friends often expect I should know something about, and I’m embarrassed that my perspective isn’t as informed as friends would like.

I had suspected that a large part of what went wrong with the New Math, and that is frustrating Common Core mathematics, is that there’s a simultaneous drive to teach with as few people as possible. Teaching something new, or in a new fashion, seems to me something that has to be practiced. A performer tries out new material on small audiences several times and several different ways before putting it into the big show; how can we reasonably expect teaching to be different?

I suspect that with enough time spent teaching smaller classes, New Math or other experimental methods might be developed that could be given out to big, 30-or-more student classes. But that does take time and money and if we could provide enough teachers to put students in (say) ten-person classes, why would we stop?

I’m not well enough in touch with current college graduates, or teaching staff, to have a sense for how well prepared they are to calculate. Being able to work out exact results is nice but not that essential. Being able to estimate and tell whether an answer could plausibly be right is necessary, though.

(And I get that wrong myself, sometimes, usually when I don’t stop to think about whether a particular number is credible. One that embarrassed me recently was a line in … I think it was Danny Danziger and John Gillingham’s 1215: Year of the Magna Carta, which gave an estimate of how many stately families in England went extinct each year. The rate was so high that it would imply nearly no families could make it through a decade, much less the century. I didn’t notice until someone called me on the implication.)

• #### Michelle H 12:45 am on Wednesday, 15 July, 2015

I suspect that the parents who suggest these learning tasks are irrelevant to adult work might not know for themselves what is causing their frustration. I recently read about the European reaction to Arabic numerals and how some cities banned the new methods as the Fibonacci ‘text books’ began to circulate. Yet, here we are. ;)

On another note, your humble confession gives me relief; I’ve been reviewing high school math to prepare for starting a new degree in the fall (in math) and sometimes I startle myself with the silly errors I can make in the calculations. I suppose mathematics is enough like writing in that a good proofreading is essential (and sometimes best done by someone else).

• #### Joseph Nebus 4:26 pm on Wednesday, 15 July, 2015

I hadn’t thought of the similarity between resistance to new teaching methods and the suspicion that Arabic numerals faced when they were introduced. (They were suspected of being ways that merchants and bankers could cheat people: anyone could follow addition and subtraction with Roman numerals, whereas Arabic numerals inflicted all these bizarre new symbols and rules that nobody could follow.)

And, yeah, everybody makes silly errors in calculations. Back in grad school one of my fellow TAs was driven to madness by the number of students who on a test kept reducing 100² to 2. I would like to laugh more at that, but as I recall, that was the same month I was stumped for a week on a differential equations assignment because I kept writing the derivative of “e^x” as “x e^x”, which is possibly even dumber than turning 100*100 into 10.

(The derivative of “e^x” is “e^x” again. Indeed, “e” is pretty much defined so that the derivative of “e^x” is “e^x” again. All I can say is I eventually caught my mistake.)

## Reading the Comics, July 4, 2015: Symbolic Curiosities Edition

Comic Strip Master Command was pretty kind to me this week, and didn’t overload me with too many comics when my computer problems were the most time-demanding. You’ve seen how bad that is by how long it’s taken me to get to answering people’s comments. But they have kept publishing mathematical comic strips, and so I’m ready for another review. This time around a couple of the strips talk about the symbols of mathematics, so that’s enough of a hook for my titling needs.

Henry Scarpelli and Craig Boldman’s Archie for the 30th of June, 2015, although that’s a rerun.

Henry Scarpelli and Craig Boldman’s Archie (June 30, rerun) is about living with long odds. People react to very improbable events in strange ways. Moose is being maybe more consistent than normal for folks in figuring that if he’s going to be lucky enough to win a contest then he’s just lucky enough to be hit by a meteor too. (It feels like a lottery to me, although I guess Moose has to be too young to enter a lottery.) And I’m amused by the logic of someone’s behavior becoming funny because it is logically consistent.

Dave Blazek’s Loose Parts (June 30) shows the offices of Math, Inc. (I believe this is actually the Chicago division, not the main headquarters.) This is also a strip I could easily see happening in the real world. It’s not different in principle from clocks which put some arithmetic expression up for the hours, or those calendars which make a math puzzle out of the date.

• #### ivasallay 6:11 am on Monday, 6 July, 2015

Location, location, location. I liked that one the best.

• #### Joseph Nebus 5:22 pm on Tuesday, 7 July, 2015

It’s a nice one since it carries some real truth to it.

• #### elkement 3:10 pm on Monday, 6 July, 2015

I like the one about the # sign. I recently learned that the dagger (which I only associated with the transpose of a matrix of operators in quantum mechanics) was used in the old days to mark repetitions in a literary text.

• #### Joseph Nebus 5:23 pm on Tuesday, 7 July, 2015

Oh, that’s interesting. I hadn’t thought about where the dagger came from, beyond the set of weird things people sometimes use when they need multiple footnotes on a page and for some reason can’t use superscripted numbers.

## Locus.

A locus is a collection of points that all satisfy some property. For example, the locus of points that are all equally distant from some center point is a circle. Or maybe it’ll be a sphere, or even a hypersphere. That depends whether we’re looking at points in a plane, in three-dimensional space, or something more. When we draw lines and parabolas and other figures like that in algebra we’re drawing locuses. Those locuses are the points that satisfy the property “the values of the coordinates of this point make that equation true”.

The idea is a bit different in connotation from “the curve of an equation”. We might not be talking about points that can be conveniently, or sensibly, described by an equation. We might want something like “the shape made by the reflection of this rectangle across this cylindrical mirror”. Or we might want “the points in space from which a space probe will crash into the moon, instead of crashing into Earth”. It’s convenient to have a shorthand way of talking about that idea. Using this word avoids necessarily tying ourselves to drawings or figures we might not be able to produce even in theory.
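If it helps to see the definition in concrete terms, here is a small illustration of my own in Python: a locus is just whatever subset of candidate points passes a test.

```python
from math import hypot

def locus(points, property_test):
    """The locus is the subset of candidate points satisfying the property."""
    return [p for p in points if property_test(p)]

# The locus of integer grid points at distance exactly 5 from the origin:
grid = [(x, y) for x in range(-6, 7) for y in range(-6, 7)]
circle = locus(grid, lambda p: abs(hypot(p[0], p[1]) - 5) < 1e-9)
print(sorted(circle))
# Lattice points such as (3, 4), (4, 3), (5, 0), (0, -5) show up here.
```

Of course a real circle is the locus over all points of the plane, not just a grid; the sketch only shows the "collection of points satisfying a property" idea.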

• #### Barb Knowles 1:02 am on Sunday, 21 June, 2015

And a math neophyte like I can understand it better…

• #### Joseph Nebus 2:27 am on Monday, 22 June, 2015

Good! I hope it helps.

## A Neat Accident

I wanted to point folks over to a blog post by Rick Wicklin, on the web site for SAS. It’s the company that makes, well, SAS, software designed for data management and analysis.

The incident behind this was an accident, as his daughter spilled a bottle of black nail polish, and it splattered on the wall in an interesting spiral. Dr Wicklin wondered if it might be a logarithmic spiral and gathered data to work out whether it might plausibly be. There’s a nice description for how to go from the messiness of a real world splatter to a clearly defined mathematical problem, and how to try fitting a curve to the messy reality of data.

Curve-fitting real-world data is a challenging field. Curves are always members of families, groups of curves that look similar. For example, circles may have any point as their center and have any radius. Lines may pass through any point you like and be as horizontal or vertical or diagonal as you like. (Yes, a straight line isn’t much of a curve, but it’s too wordy to talk of “line or curve fitting” if you don’t have to. In this context, a line is a kind of curve in the same way a square is a kind of parallelogram.) There are many, many more kinds of curves, parabolas and hyperbolas and cubics and quartics and trigonometric functions and, oh yes, we can add them together, or multiply them, or even compose them (anyone up for the sine of a logarithm?).

So you start with the kind of curve you think your data really is, and try to find the set of parameters that make the curve and the data look like they’re representations of the same thing. The drawing of your curve and the drawing of your data points will never exactly overlap, though. Your data, coming from the real world, will be messy. Some of the nail polish spots will be in the ‘wrong’ place, or it’ll be ambiguous what the ‘real’ location of a point should be. (After all, what is the real location of a spot? Its center? How do you know where the exact center is? What if the spot is a smeared raindrop-shape rather than a circle?)

It’s not just an artistic eye that judges whether the parameters you’ve picked are a good fit. We can quantify how “good” a fit the curve is to the data, and find the parameters that make the best possible, or the best findable, fit. But there is still an artistic eye involved: there are infinitely many imaginable curves. If you start from the wrong kind of curve, you might get a tolerable fit. But it won’t give insight into the reasons the data looks like this, or what it might look like as more data comes in. Happily, computers make it easy to try out many different kinds of curves, but having a sense of what curves are plausible makes for better work.
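To make the fitting idea concrete, here is a generic sketch with made-up data (my own illustration, not Dr Wicklin's actual analysis). A logarithmic spiral r = a·e^(bθ) becomes a straight line after taking logarithms, ln r = ln a + bθ, so ordinary least squares on the pairs (θ, ln r) recovers both parameters.

```python
import math
import random

def fit_log_spiral(thetas, radii):
    """Fit r = a * exp(b * theta) by least squares on ln r = ln a + b * theta."""
    n = len(thetas)
    ys = [math.log(r) for r in radii]
    mean_t = sum(thetas) / n
    mean_y = sum(ys) / n
    b = sum((t - mean_t) * (y - mean_y) for t, y in zip(thetas, ys)) \
        / sum((t - mean_t) ** 2 for t in thetas)
    a = math.exp(mean_y - b * mean_t)
    return a, b

# Fake "splatter" data: a known spiral plus a little multiplicative noise.
rng = random.Random(7)
thetas = [0.1 * k for k in range(1, 80)]
radii = [2.0 * math.exp(0.15 * t) * (1 + 0.02 * rng.uniform(-1, 1))
         for t in thetas]
a, b = fit_log_spiral(thetas, radii)
print(round(a, 2), round(b, 3))   # close to the true a = 2.0, b = 0.15
```

The artistic-eye part is everything the sketch takes for granted: that a logarithmic spiral, rather than some other family, was the right kind of curve to try in the first place.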

## Error

This is one of my A to Z words that everyone knows. An error is some mistake, evidence of our human failings, to be minimized at all costs. That’s … well, it’s an attitude that doesn’t let you use error as a tool.

An error is the difference between what we would like to know and what we do know. Usually, what we would like to know is something hard to work out. Sometimes it requires complicated work. Sometimes it requires an infinite amount of work to get exactly right. Who has the time for that?

This is how we use errors. We look for methods that approximate the thing we want, and that estimate how much of an error that method makes. Usually, the method involves doing some basic step some large number of times. And usually, if we did the step more times, the estimate of the error we make will be smaller. My essay “Calculating Pi Less Terribly” shows an example of this. If we add together more terms from that Leibniz formula we get a running total that’s closer to the actual value of π.
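Here is what that looks like in practice, a quick sketch of the Leibniz partial sums and how their error shrinks:

```python
import math

def leibniz_pi(n_terms):
    """Partial sum of the Leibniz series: pi = 4 * (1 - 1/3 + 1/5 - 1/7 + ...)."""
    return 4 * sum((-1) ** k / (2 * k + 1) for k in range(n_terms))

for n in (10, 100, 1000, 10000):
    estimate = leibniz_pi(n)
    print(n, estimate, abs(math.pi - estimate))
# The error shrinks roughly like 1/n: ten times the work buys about one more digit.
```

That slow shrinkage is exactly why the essay calls this a terrible way to calculate π, but it is a clean example of estimating the error a method makes.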

• #### baffledbaboon 3:13 pm on Wednesday, 3 June, 2015

Whenever I make an error – my partner likes to tell me that I “broke math”. This all stemming from the one time I was given all the steps to the problem and still got an answer that wasn’t even close.

• #### Joseph Nebus 10:41 pm on Friday, 5 June, 2015

Aw, that sort of thing happens to everybody. Mathematicians especially. There’s a bit of folklore that says to never give an arithmetic problem to a mathematician because even if you ever do get an answer from it, it won’t be anything near right.
