## What I Learned Doing The A To Z Project

So now I’ve had the chance to rest a little and recover from the Summer 2015 Mathematics A To Z Project. I’d been inspired to do it by Sue Archer and her Doorway Between Worlds blog. I had piggybacked on her discussing the word “into” with a description of its mathematical use.

The first thing I learned is that it’s easy to write several weeks’ worth of essays in a big session if I have a clear idea what they’re all to be about. That left me feeling good. I do worry when I go several days without anything fresh or more than the reblog of someone’s interesting pictures. I like sharing someone else’s interesting pictures too, mind you. I just know it’s not work on my part to share them. Also when I had to travel a while in May and June, and when my computer was out for repairs, I didn’t have to scramble to do anything.

Another is that I liked the format, which had me jumping around several concentrations of mathematics. It also had me jump from commonly accessible levels through math-major stuff all the way to grad school stuff. I particularly liked trying to introduce graduate-level mathematics in tolerably clear English and in around a thousand words. Helping me out here was the Hemingway Editor, which attempts to judge how complicated one’s writing is. It’s in favor of shorter, clearer sentences with fewer adverbs and no uses of the word “very”. I can’t agree with everything it judges. It’s a computer, after all. But writing about advanced subjects while watching how complicated my sentences came out has helped my prose style.

Something else I’ve learned from this is that there’s a taste for pop-mathematics about more advanced topics. It’s easy to suppose that people who never studied, or never liked studying, mathematics are most likely to read about the easy stuff. That’s probably not quite so. Probably what people really want is to feel like they’re being let in on the cool stuff. Mathematics has a lot of cool stuff. A lot of it requires a long running start, though. For example, I couldn’t talk about a ring until I’d described what a group was. So that essay felt like it was taking forever to get started while I wrote it. I don’t know how it felt to people reading it. The z-transform, similarly, has a lot that’s neat about it, but it took a while to get there. I hope it stayed promising long enough for people to stick through it.

My terror throughout writing all 26 entries was that I was about to say something really, obviously stupid, and that a flock of mathematicians would descend on me. Scorn and semiprofessional humiliation would follow. I’m still a bit worried, although nobody’s said anything too bad to me yet.

The project was quite good for my readership. Between the A to Z essays, Reading the Comics posts, occasional other essays, and reblogs, I went a solid thirty days with something new posted every day. That’s surely why June was my most-read month here ever. And why July, though having fewer posts, was still pretty well-read. I confess I’m disappointed and a bit surprised I never hit the “Freshly Pressed” lottery with any of these. But that’s just the natural envy any writer has. Everybody else always seems to be more successful.

I’d like to do a similar thematically linked project. I might in a few months do another A to Z. I’m open to other themes, though, and would love to hear suggestions.

• #### sheldonk2014 8:16 pm on Friday, 31 July, 2015

Hey #s what gives

• #### Sue Archer 12:32 am on Saturday, 1 August, 2015

Glad to hear that your A to Z experience went well for you, Joseph! I’ve used the Hemingway app as well, and have found it to be a good quick check to see whether you’re getting too wordy for your audience. Congrats on getting through all those essays!

• #### howardat58 7:15 pm on Saturday, 1 August, 2015

A is for aliasing

## Reading the Comics, July 29, 2015: Not Entirely Reruns Edition

Zach Weinersmith’s Saturday Morning Breakfast Cereal (July 25) gets its scheduled appearance here with a properly formed Venn Diagram joke. I’m unqualified to speak for rap musicians. When mathematicians speak of something being “for reals” they mean they’re speaking about a variable that might be any of the real numbers. This is as opposed to limiting the variable to being some rational or irrational number, or being a whole number. It’s also as opposed to letting the variable be some complex-valued number, or some more exotic kind of number. It’s a way of saying what kind of thing we want to find true statements about.

I don’t know when the Saturday Morning Breakfast Cereal first ran, but I know I’ve seen it appear in my Twitter feed. I believe all the Gocomics.com postings of this strip are reruns, but I haven’t read the strip long enough to say.

Steve Sicula’s Home And Away (July 26) is built on the joke of kids wise to mathematics during summer vacation. I don’t think this is a rerun, although we’ve seen the joke this summer before.


Daniel Beyer’s Offbeat Comics (July 27) depicts an angel with a square halo because “I was good².” The association between squaring a number and squares goes back a long time. Well, it’s right there in the name, isn’t it? Florian Cajori’s A History Of Mathematical Notations cites the term “latus” and the abbreviation “l” to represent the side of a square being used by the Roman surveyor Junius Nipsus in the second century; for centuries this would be as good a term as anyone had for the thing to be calculated. (Res, meaning “thing”, was also popular.) Once you’ve taken the idea of calculating based on the length of a square, the jump to “square” for “length times itself” seems like a tiny one. But Cajori doesn’t seem to have examples of that being written until the 16th century.

The square of the quantity you’re interested in might be written q, for quadratus. The cube would be c, for cubus. The fourth power would be b or bq, for biquadratus, and so on. This is tolerable if you only have to work with a single unknown quantity, but the notation turns into gibberish the moment you want two variables in the mix. So it collapsed in the 17th century, replaced by the familiar x² and x³ and so on. Many authors developed notations close to this: James Hume would write xⁱⁱ or xⁱⁱⁱ; Pierre Hérigone x2 or x3, all in one line. René Descartes would write x² or x³ or so, and many, many followed him. Still, quite a few people — including René Descartes, Isaac Newton, and even as late a figure as Carl Gauss, in the early 19th century — would resist “x²”. They’d prefer “xx”. Gauss defended this on the grounds that “x²” takes up just as much space as “xx” and so fails the biggest point of having notation.

Corey Pandolph’s Toby, Robot Satan (July 27, rerun) uses sudoku as an example of the logic and reasoning problems one would expect a robot to be able to do. It is weird to encounter one that’s helpless before them.

Cory Thomas’s Watch Your Head (July 27, rerun from 2007) mentions “Chebyshev grids” and “infinite boundaries” as things someone doing mathematics on the computer would do. And it does so correctly. Differential equations describe how things change on some domain over space and time. They can be very hard to solve exactly, but can be put on the computer very well. For this, we pick a representative set of points which we call a mesh. And we find an approximate representation of the original differential equation, which we call a discretization or a difference equation. We can then solve this difference equation on the mesh, and if we’ve done our work right, this approximation will let us get a good estimate for the solution to the original problem over the whole original domain.
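If you’d like to see the mesh-and-difference-equation idea in code, here is a minimal sketch. Everything in it, names included, is my own illustration rather than anything from the strip: it solves the differential equation du/dt = -u, whose exact solution is exp(-t), by stepping along a uniform mesh of time points.

```python
import math

# A minimal sketch of the mesh-and-difference-equation idea: solve the
# differential equation du/dt = -u with u(0) = 1, whose exact solution
# is exp(-t), by stepping along a uniform mesh of time points.

def solve_decay(t_end=1.0, n=1000):
    dt = t_end / n                  # spacing between mesh points
    u = 1.0                         # initial condition u(0) = 1
    for _ in range(n):
        u = u + dt * (-u)           # the difference equation for du/dt = -u
    return u

print(solve_decay(), math.exp(-1.0))   # close, and closer as n grows
```

Refining the mesh (a larger n) makes the approximation track the true solution better, which is the “if we’ve done our work right” part above.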

A Chebyshev grid is a particular arrangement of mesh points. It’s not uniform; it tends to clump up, becoming more common near the ends of the boundary. This is useful if you have reason to expect that the boundaries are more interesting than the middle of the domain. There’s no sense wasting good computing power calculating boring stuff. The mesh is named for Pafnuty Chebyshev, a 19th Century Russian mathematician whose name is all over mathematics. Unfortunately since he was a 19th Century Russian mathematician, his name is transcribed into English all sorts of ways. Chebyshev seems to be most common today, though Tchebychev used to be quite popular, which is why polynomials of his might be abbreviated as T. There are many alternatives.
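For the curious, the standard recipe for those clumping mesh points is simple enough to sketch in a few lines of Python (the function name here is my own):

```python
import math

# A sketch of a Chebyshev grid on the interval [-1, 1]: the mesh points
# x_j = cos(j*pi/N) for j = 0, ..., N, which clump near the two endpoints.

def chebyshev_points(n):
    return [math.cos(j * math.pi / n) for j in range(n + 1)]

pts = chebyshev_points(8)
end_gap = abs(pts[0] - pts[1])   # spacing right at the boundary
mid_gap = abs(pts[3] - pts[4])   # spacing near the middle
print(end_gap, mid_gap)          # the boundary spacing is much smaller
```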

Ah, but how do you represent infinite boundaries with the finitely many points of any calculatable mesh? There are many approaches. One is to just draw a really wide mesh and trust that all the action is happening near the center so omitting the very farthest things doesn’t hurt too much. Or you might figure what the average of things far away is, and make a finite boundary that has whatever that value is. Another approach is to make the boundaries repeating: go far enough to the right and you loop back around to the left, go far enough up and you loop back around to down. Another approach is to create a mesh that is bundled up tight around the center, but that has points which do represent going off very, very far, maybe in principle infinitely far away. You’re allowed to create meshes that don’t space points uniformly, and that even move points as you compute. That’s harder work, but it’s legitimate numerical mathematics.
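The “bundled up tight around the center” approach can be sketched with a simple change of variable. The particular mapping below is just one illustrative choice of mine, not the only one in use:

```python
# A sketch of a mesh bundled up around the center whose outer points run
# off toward infinity: map grid values t in (-1, 1) through
# x = t / (1 - t^2), which sends t near ±1 off toward ±infinity.

def stretch(t):
    return t / (1.0 - t * t)

mesh = [stretch(j / 10.0) for j in range(-9, 10)]
# Near the center the points barely move; near t = ±1 they fly outward.
print(mesh[9], mesh[-1])   # the center point, and the far-right point
```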

So, the mathematical work being described here is — so far as described — legitimate. I’m not competent to speak about the monkey side of the research.

Greg Evans’s Luann Againn (July 29; rerun from July 29, 1987) name-drops the Law of Averages. There are actually multiple Laws of Averages, with slightly different assumptions and implications, but they all come to about the same meaning. You can expect that if some experiment is run repeatedly, the average value of the experiments will be close to the true value of whatever you’re measuring. An important step in proving this law was done by Pafnuty Chebyshev.
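A quick simulation shows the Law of Averages doing its work. The fair six-sided die here is my own example; its true mean value is 3.5:

```python
import random

# A sketch of the Law of Averages at work: roll a fair die many times
# and watch the running average settle near the true mean of 3.5.

random.seed(42)                  # fixed seed so the run is repeatable
rolls = [random.randint(1, 6) for _ in range(100_000)]
average = sum(rolls) / len(rolls)
print(average)                   # should be close to 3.5
```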

• #### sheldonk2014 1:26 am on Thursday, 30 July, 2015

Bugs Bunny was 75 yesterday
Bob Clampett was the developer, also Tweety and Porky Pig
Mel Blanc was the voice

## Lewis Carroll Tries Changing The Way You See Trigonometry

Today’s On This Day In Math tweet was well-timed. I’d recently read Robin Wilson’s Lewis Carroll In Numberland: His Fantastical Mathematical Logical Life. It’s a biography centered around Charles Dodgson’s mathematical work. It shouldn’t surprise you that he was fascinated with logic, and wrote texts — and logic games — that crackle with humor. People who write logic texts have a great advantage over other mathematicians (or philosophers). Almost any of their examples can be presented as a classically structured joke. Vector calculus isn’t so welcoming. But Carroll was good at logic-joke writing.

Developing good notation was one of Dodgson/Carroll’s ongoing efforts, though. I’m not aware of any of his symbols that have got general adoption. But he put forth some interesting symbols to denote the sine and cosine and other trigonometric functions. In 1861, the magazine The Athenaeum reviewed one of his books, with its new symbols for the basic trigonometric functions. (The link shows off all these symbols.) The reviewer was unconvinced, apparently.

I confess that I am, too, but mostly on typographical grounds. It is very easy to write or type out “sin θ” and get something that makes one think of the sine of angle θ. And I’m biased by familiarity, after all. But Carroll’s symbols have a certain appeal. I wonder if they would help people learning the functions keep straight what each one means.

The basic element of the symbols is a half-circle. The sine is denoted by the half-circle above the center, with a vertical line in the middle of that. So it looks a bit like an Art Deco ‘E’ fell over. The cosine is denoted by the half circle above the center, but with a horizontal line underneath. It’s as if someone started drawing Chad and got bored and wandered off. The tangent gets the same half-circle again, with a horizontal line on top of the arc, literally tangent to the circle.

There’s a subtle brilliance to this. One of the ordinary ways to think of trigonometric functions is to imagine a circle with radius 1 that’s centered on the origin. That is, its center has x-coordinate 0 and y-coordinate 0. And we imagine drawing the line that starts at the origin, and that is off at an angle θ from the positive x-axis. (That is, the line that starts at the origin and goes off to the right. That’s the direction where the x-coordinate of points is increasing and the y-coordinate is always zero.) (Yes, yes, these are line segments, or rays, rather than lines. Let it pass.)

The sine of the angle θ is also going to be the y-coordinate of the point where the line crosses the unit circle. That is, it’s the vertical coordinate of that point. So using a vertical line touching a semicircle to suggest the sine represents visually one thing that the sine means. And the cosine of the angle θ is going to be the x-coordinate of the point where the line crosses the unit circle. So representing the cosine with a horizontal line and a semicircle again underlines one of its meanings. And, for that matter, the line might serve as a reminder to someone that the sine of a right angle will be 1, while the cosine of an angle of zero is 1.
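Here is that unit-circle picture as a few lines of Python, if it helps. This is my own sketch, nothing Carroll drew:

```python
import math

# The picture described above, in code: the line at angle theta crosses
# the unit circle at the point (cos theta, sin theta).

theta = math.pi / 6          # 30 degrees
x = math.cos(theta)          # horizontal coordinate of the crossing point
y = math.sin(theta)          # vertical coordinate of the crossing point

# The crossing point really does sit on the unit circle:
print(x * x + y * y)         # 1, up to roundoff
```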

The tangent has a more abstract interpretation. But a line that comes up to and just touches a curve at a single point is, literally, a tangent line. This might not help one remember any useful values for the tangent. (That the tangent of zero is zero, the tangent of half a right angle is 1, the tangent of a right angle is undefined). But it’s still a guide to what things mean.

The cotangent is just the tangent upside-down. Literally; it’s the lower half of a circle, with a horizontal line touching it at its lowest point. That’s not too bad a symbol, actually. The cotangent of an angle is the reciprocal of the tangent of an angle. So making its symbol be the tangent flipped over is mnemonic.

The secant and cosecant are worse symbols, it must be admitted. The secant of an angle is the reciprocal of the cosine of the angle, and the cosecant is the reciprocal of the sine. As far as I can tell they’re mostly used because it’s hard to typeset $\frac{1}{\sin\left(\theta\right)}$. And to write instead $\sin^{-1}\left(\theta\right)$ would be confusing as that’s often used for the inverse sine, or arcsine, function. I don’t think these symbols help matters any. I’m surprised Carroll didn’t just flip over the cosine and sine symbols, the way he did with the cotangent.

The versed sine function is one that I got through high school without hearing about. I imagine you have too. The versed sine, or the versine, of an angle is equal to one minus the cosine of the angle. Why do we need such a thing? … Computational convenience is the best answer I can find. It turns up naturally if you’re trying to work out the distance between points on the surface of a sphere, so navigators needed to know it.

And if we need to work with small angles, then this can be more computationally stable than the cosine is. The cosine of a small angle is close to 1, and the difference between 1 and the cosine, if you need such a thing, may be lost to roundoff error. But the versed sine … well, it will be the same small number. But the table of versed sines you have to refer to will list more digits. There’s a difference between working out “1 – 0.9999” and working with “0.0001473”, if you need three digits of accuracy.
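If you want to watch the roundoff problem happen, here is a sketch. The identity versin θ = 2 sin²(θ/2) is the standard numerically stable way to compute the versed sine:

```python
import math

# The roundoff point above, in code: for a small angle the naive
# 1 - cos(theta) loses its digits to cancellation, while the equivalent
# identity versin(theta) = 2 * sin(theta/2)**2 keeps them.

theta = 1e-8
naive = 1.0 - math.cos(theta)            # catastrophic cancellation
stable = 2.0 * math.sin(theta / 2) ** 2  # keeps full precision
print(naive, stable)
```

The true value is about 5 × 10⁻¹⁷; the naive subtraction returns essentially nothing, while the identity recovers it.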

But now we don’t need printed tables of trigonometric functions to get three (or many more) digits of accuracy. So we can afford to forget the versed sine ever existed. I learn (through Wikipedia) that there are also functions called versed cosines, coversed sines, hacoversed cosines, and excosecants, among others. These names have a wonderful melody and are almost poems by themselves. Just the same I’m glad I don’t have to remember what they all are.

Carroll’s notation just replaces the “sin” or “cos” or “tan” with these symbols, so you would have the half-circle and the line followed by θ or whatever variable you used for the angle. So the symbols don’t save any space on the line. They do take fewer pen strokes to write: just two for each symbol, where writing the truncated words out by hand takes three or four (or, for cosecant, as many as five), unless you’re writing in cursive. So they’re probably faster to write. I don’t know why precisely the symbols didn’t take hold, then. I suppose part of it is that people were used to writing “sin θ”. And typesetters already got enough hazard pay dealing with mathematicians and their need for specialized symbols. Why add in another half-dozen or more specialized bits of type for something everyone’s already got along without?

Still, I think there might be some use in these as symbols for mathematicians in training. I’d be interested to know how they serve people just learning trigonometry.

• #### sheldonk2014 10:46 pm on Monday, 27 July, 2015

It’s really fascinating that the writer had a thing for numbers
What you won’t find……
Sheldon

• #### Joseph Nebus 4:37 am on Tuesday, 28 July, 2015

It really stands out how much Lewis Carroll liked the playful side of mathematics. If I have a slow stretch I might just pull out various puzzles and games he developed — there were a lot of them — to show how much recreational mathematics he had to share.

• #### howardat58 12:14 am on Tuesday, 28 July, 2015

There is a crying need to get rid of sin x, along with sin^2 x (write it with superscript).
Why don’t we write y = fx for any old function, or log^2 x for the square of log x ?
Consistency is a non-feature of elementary math, and much confusion is a result.

PUT THE BRACKETS IN ! (parentheses)
sin(x), cos(x), sin(x+a), (sin(x))^2, or maybe ok with sin(x)^2
and as for \sin^{-1}\left(\theta\right) then 1/sin(x) and arcsin(x) are far far better.

• #### Joseph Nebus 4:48 am on Tuesday, 28 July, 2015

There are some fields of analysis in which writing y = fx is the thing to do. But that does require working out how you want the composition of functions to look, and what f² ought to mean in that context.

There’s probably no way to be perfectly consistent in notation throughout mathematics. There’s just too much and some stuff is useful in some contexts that isn’t in others. That parentheses mark a group of symbols as a common unit is pretty nearly universal. We always need some aggregation term, after all, and just drawing in a vinculum won’t always cut it.

• #### Thumbup 11:44 am on Tuesday, 28 July, 2015

He is a child molester.

## Reading the Comics, July 24, 2015: All The Popular Topics Are Here Edition

This week all the mathematically-themed comic strips seem to have come from Gocomics.com. Since that gives pretty stable URLs I don’t feel like I can include images of those comics. So I’m afraid it’s a bunch of text this time. I like to think you enjoy reading the text, though.

Mark Anderson’s Andertoons seemed to make its required appearance here with the July 20th strip. And the kid’s right about parentheses being very important in mathematics and “just” extra information in ordinary language. Parentheses as a way of grouping together terms appear as early as the 16th century, according to Florian Cajori. But the symbols wouldn’t become common for a couple of centuries. Cajori speculates that the use of parentheses in normal rhetoric may have slowed mathematicians’ acceptance of them. Vinculums — lines placed over a group of terms — and colons before and after the group seem to have been more popular. Leonhard Euler would use parentheses a good bit, and that settled things. Besides all his other brilliances, Euler was brilliant at setting notation. There are still other common ways of aggregating terms. But most of them are straight brackets or curled braces, which are almost the smallest possible changes from parentheses you can make.

Though his place was secure, Mark Anderson got in another strip the next day. This one’s based on the dangers of extrapolating mindlessly. One trouble with extrapolation is that if we just want to match the data we have then there are literally infinitely many possible extrapolations, each equally valid. But most of them are obvious garbage. If the high temperature the last few days was 78, 79, 80, and 81 degrees Fahrenheit, it may be literally true that we could extrapolate that to a high of 120,618 degrees tomorrow, but we’d be daft to believe it. If we understand the factors likely to affect our data we can judge what extrapolations are plausible and what ones aren’t. As ever, sanity checking, verifying that our results could be correct, is critical.
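To make the point concrete, here is a sketch (the functions are my own inventions) of two rules that fit the observed highs of 78, 79, 80, and 81 equally well and disagree completely about tomorrow. The absurd rule is built to hit the strip-style figure of 120,618 by adding a quartic term that vanishes on every observed day:

```python
# Two extrapolations that match the observed highs on days 0 through 3
# exactly, yet predict wildly different highs for day 4. The quartic
# correction term is zero at day = 0, 1, 2, and 3, so it is invisible
# in the data.

def linear(day):
    return 78 + day

def absurd(day):
    c = (120618 - 82) / 24          # chosen so absurd(4) lands near 120,618
    return 78 + day + c * day * (day - 1) * (day - 2) * (day - 3)

print([linear(d) for d in range(4)])   # matches the data
print([absurd(d) for d in range(4)])   # matches the data just as well
print(linear(4), absurd(4))            # 82 versus roughly 120,618
```

Nothing in the data alone rules the absurd rule out; only outside knowledge of how temperatures behave does.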

Bill Amend’s FoxTrot Classics (July 20) continues Jason’s attempts at baking without knowing the unstated assumptions of baking. See above comments about sanity checking. At least he’s ruling out the obviously silly rotation angle. (The strip originally ran the 22nd of July, 2004. You can see it in color, there, if you want to see things like that.) Some commenters have gotten quite worked up about Jason saying “degrees Kelvin” when he need only say “Kelvin”. I can’t join them. Besides the phenomenal harmlessness of saying “degrees Kelvin”, it wouldn’t quite flow for Jason to think “350 degrees” short for “350 Kelvin” instead of “350 degrees Kelvin”.

Nate Frakes’s Break of Day (July 21) is the pure number wordplay strip for this roundup. This might be my favorite of this bunch, mostly because I can imagine the way it would be staged as a bit on The Muppet Show or a similar energetic and silly show. John Atkinson’s Wrong Hands for July 23 is this roundup’s general mathematics wordplay strip. And Mark Parisi’s Off The Mark for July 22nd is the mathematics-literalist strip for this roundup.

Ruben Bolling’s Tom The Dancing Bug (July 23, rerun) is nominally an economics strip. Its premise is that since rational people do what maximizes their reward for the risk involved, then pointing out clearly how the risks and possible losses have changed will change their behavior. Underlying this are assumptions from probability and statistics. The core is the expectation value. That’s an average of what you might gain, or lose, from the different outcomes of something. That average is weighted by the probability of each outcome. A strictly rational person who hadn’t ruled anything in or out would be expected to do the thing with the highest expected gain, or the smallest expected loss. That people do not do things this way vexes folks who have not known many people.
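The expectation value itself is a one-line calculation. Here is a sketch; the gamble’s numbers are made up purely for illustration:

```python
# A sketch of the expectation value described above: each outcome's
# payoff, weighted by its probability, then summed. The gamble here is
# hypothetical: a 10 percent chance of winning 100 against a 90 percent
# chance of losing 5.

def expected_value(outcomes):
    # outcomes: list of (probability, payoff) pairs
    return sum(p * payoff for p, payoff in outcomes)

gamble = [(0.10, 100.0), (0.90, -5.0)]
print(expected_value(gamble))   # 10.0 - 4.5 = 5.5
```

A strictly rational actor would take this gamble over any alternative with a smaller expectation value; as noted above, actual people are rarely so tidy.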

• #### ivasallay 5:35 am on Monday, 27 July, 2015

I liked Andertoons the best. Off the Mark had a good idea, but are there very many math majors who stock groceries?

• #### Joseph Nebus 4:35 am on Tuesday, 28 July, 2015

I’d imagine there’s a fair number of mathematics majors stocking groceries. Have you seen the job listings lately?

## A Summer 2015 Mathematics A to Z Roundup

Since I’ve run out of letters there’s little dignified to do except end the Summer 2015 Mathematics A to Z. I’m still organizing my thoughts about the experience. I’m quite glad to have done it, though.

For the sake of good organization, here’s the set of pages that this project’s seen created:

## z-transform.

The z-transform comes to us from signal processing. The signal we take to be a sequence of numbers, all representing something sampled at uniformly spaced times. The temperature at noon. The power being used, second-by-second. The number of customers in the store, once a month. Anything. The sequence of numbers we take to stretch back into the infinitely great past, and to stretch forward into the infinitely distant future. If it doesn’t, then we pad the sequence with zeroes, or some other safe number that we know means “nothing”. (That’s another classic mathematician’s trick.)

It’s convenient to have a name for this sequence. “a” is a good one. The different sampled values are denoted by an index. a₀ represents whatever value we have at the “start” of the sample. That might represent the present. That might represent where sampling began. That might represent just some convenient reference point. It’s the equivalent of mile marker zero; we have to have something be the start.

a₁, a₂, a₃, and so on are the first, second, third, and so on samples after the reference start. a₋₁, a₋₂, a₋₃, and so on are the first, second, third, and so on samples from before the reference start. That might be the last couple of values before the present.

So for example, suppose the temperatures the last several days were 77, 81, 84, 82, 78. Then we would probably represent this as a₋₄ = 77, a₋₃ = 81, a₋₂ = 84, a₋₁ = 82, a₀ = 78. We’ll hope this is Fahrenheit or that we are remotely sensing a temperature.

The z-transform of a sequence of numbers is something that looks a lot like a polynomial, based on these numbers. For this five-day temperature sequence the z-transform would be the polynomial $77 z^4 + 81 z^3 + 84 z^2 + 82 z^1 + 78 z^0$. (z¹ is the same as z. z⁰ is the same as the number “1”. I wrote it this way to make the pattern more clear.)

I would not be surprised if you protested that this doesn’t merely look like a polynomial but actually is one. You’re right, of course, for this set, where all our samples are from negative (and zero) indices. If we had positive indices then we’d lose the right to call the transform a polynomial. Suppose we trust our weather forecaster completely, and add in a₁ = 83 and a₂ = 76. Then the z-transform for this set of data would be $77 z^4 + 81 z^3 + 84 z^2 + 82 z^1 + 78 z^0 + 83 \left(\frac{1}{z}\right)^1 + 76 \left(\frac{1}{z}\right)^2$. You’d probably agree that’s not a polynomial, although it looks a lot like one.
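If it helps to see this as something computable, here is a sketch (the function name is mine) that treats the z-transform of the five recorded temperatures as a function of z. The sample at index k multiplies z to the power of negative k, matching the pattern above:

```python
# The z-transform above, as an evaluable function: samples indexed from
# -4 up to 0 (the five recorded temperatures), with the sample at index
# k multiplying z**(-k).

def z_transform(samples, z):
    # samples: dict mapping an index k to the sample value a_k
    return sum(a * z ** (-k) for k, a in samples.items())

temps = {-4: 77, -3: 81, -2: 84, -1: 82, 0: 78}
# At z = 1 the transform is just the sum of the samples:
print(z_transform(temps, 1.0))   # 77 + 81 + 84 + 82 + 78 = 402
```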

The use of z for these polynomials is basically arbitrary. The main reason to use z instead of x is that we can learn interesting things if we imagine letting z be a complex-valued number. And z carries connotations of “a possibly complex-valued number”, especially if it’s used in ways that suggest we aren’t looking at coordinates in space. It’s not that there’s anything in the symbol x that refuses the possibility of it being complex-valued. It’s just that z appears so often in the study of complex-valued numbers that it reminds a mathematician to think of them.

A sound question you might have is: why do this? And there’s not much advantage in going from a list of temperatures “77, 81, 84, 82, 78, 83, 76” over to a polynomial-like expression $77 z^4 + 81 z^3 + 84 z^2 + 82 z^1 + 78 z^0 + 83 \left(\frac{1}{z}\right)^1 + 76 \left(\frac{1}{z}\right)^2$.

Where this starts to get useful is when we have an infinitely long sequence of numbers to work with. Yes, it does too. It will often turn out that an interesting sequence transforms into a polynomial that itself is equivalent to some easy-to-work-with function. My little temperature example there won’t do it, no. But consider the sequence that’s zero for all negative indices, and 1 for the zero index and all positive indices. This gives us the polynomial-like structure $\cdots + 0z^2 + 0z^1 + 1 + 1\left(\frac{1}{z}\right)^1 + 1\left(\frac{1}{z}\right)^2 + 1\left(\frac{1}{z}\right)^3 + 1\left(\frac{1}{z}\right)^4 + \cdots$. And that turns out to be the same as $1 \div \left(1 - \left(\frac{1}{z}\right)\right)$. That’s much shorter to write down, at least.
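You can check that claim numerically, at least for values of z bigger than 1 in size. This sketch (my own) adds up the terms of the unit-step transform and compares against the closed form:

```python
# A numerical check of the claim above: for the unit-step sequence, the
# sum 1 + (1/z) + (1/z)**2 + ... closes in on 1 / (1 - 1/z) once |z| > 1.

def step_transform_partial(z, terms):
    return sum((1.0 / z) ** n for n in range(terms))

z = 2.0
closed_form = 1.0 / (1.0 - 1.0 / z)   # equals 2 when z = 2
print(step_transform_partial(z, 50), closed_form)
```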

Probably you’ll grant that, but still wonder what the point of doing that is. Remember that we started by thinking of signal processing. A processed signal is a matter of transforming your initial signal. By this we mean multiplying your original signal by something, or adding something to it. For example, suppose we want a five-day running average temperature. This we can find by taking one-fifth of today’s temperature, a₀, and adding to that one-fifth of yesterday’s temperature, a₋₁, and one-fifth of the day before’s temperature, a₋₂, and one-fifth of a₋₃, and one-fifth of a₋₄.
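That five-day running average is about the simplest piece of signal processing there is. As a sketch (names my own), applied to the recorded temperatures:

```python
# The five-day running average described above: one-fifth of today's
# sample plus one-fifth of each of the previous four days' samples.

def running_average(samples, width=5):
    return sum(samples[-width:]) / width

temps = [77, 81, 84, 82, 78]
print(running_average(temps))   # (77 + 81 + 84 + 82 + 78) / 5 = 80.4
```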

The effect of processing a signal is equivalent to manipulating its z-transform. By studying properties of the z-transform, such as where its values are zero or where they are imaginary or where they are undefined, we learn things about what the processing is like. We can tell whether the processing is stable — does it keep a small error in the original signal small, or does it magnify it? Does it serve to amplify parts of the signal and not others? Does it dampen unwanted parts of the signal while keeping the main intact?

We can understand how data will be changed by understanding the z-transform of the way we manipulate it. That z-transform turns a signal-processing idea into a complex-valued function. And we have a lot of tools for studying complex-valued functions. So we become able to say a lot about the processing. And that is what the z-transform gets us.

• #### sheldonk2014 4:45 pm on Wednesday, 22 July, 2015

Do you go to that pinball place in New Jersey

• #### Joseph Nebus 1:46 am on Thursday, 23 July, 2015

When I’m able to, yes! Fortunately work gives me occasional chances to revisit my ancestral homeland and from there it’s a quite reasonable drive to Asbury Park and the Silverball Museum. It’s a great spot and I recommend it highly.

There’s apparently also a retro arcade in Redbank, with a dozen or so pinball machines and a fair number of old video games. I’ve not been there yet, though.

• #### howardat58 2:18 am on Thursday, 23 July, 2015

Here is a bit more.

z is used in dealing with recurrence relations and their active form, with input as well, in the form of the “z transfer function”:
a(n) is the input at time n, u(n) is the output at time n, these can be viewed as sequences
u(n+1) = u(n) + a(n+1) represents the integral/accumulation/sum of series for the input process
z is considered as an operator which moves the whole sequence back one step,
Applied to the sequence equation shown you get u(n+1) = zu(n),
and the equation becomes
zu(n) = u(n) + za(n)
Now since everything has (n) we don’t need it, and get
zu = u + za
Solving for u gives
u = z/(z-1)a
which describes the behaviour of the output for a given sequence of inputs
z/(z-1) is called the transfer function of the input/output system
and in this case of summation or integration the expression z/(z-1) represents the process of adding up the terms of the sequence.
One nice thing is that if you do all of this for the successive differences process
u(n+1) = a(n+1) – a(n)
you get the transfer function (z-1)/z, the discrete differentiation process.

• #### Joseph Nebus 2:11 pm on Saturday, 25 July, 2015

That’s a solid example of using these ideas. May I bump it up to a main post in the next couple days so that (hopefully) more people catch it?

## y-axis.

It’s easy to tell where you are on a line. At least it is if you have a couple tools. One is a reference point. Another is the ability to say how far away things are. Then if you say something is a specific distance from the reference point you can pin down its location to one of at most two points. If we add to the distance some idea of direction we can pin that down to at most one point. Real numbers give us a good sense of distance. Positive and negative numbers fit the idea of orientation pretty well.

To tell where you are on a plane, though, that gets tricky. A reference point and a sense of how far things are help. Knowing something is a set distance from the reference point tells you something about its position. But there’s still an infinite number of possible places the thing could be, unless it’s at the reference point.

The classic way to solve this is to divide space into a couple directions. René Descartes made a name for himself with — well, with many things. But one of them, in mathematics, was to describe the positions of things by components. One component describes how far something is in one direction from the reference point. The next component describes how far the thing is in another direction.

This sort of scheme we see as laying down axes. One, conventionally taken to be the horizontal or left-right axis, we call the x-axis. The other direction — one perpendicular, or orthogonal, to the x-axis — we call the y-axis. Usually this gets drawn as the vertical axis, the one running up and down the sheet of paper. That’s not required; it’s just convention.

We surely call it the x-axis in echo of the use of x as the name for a number whose value we don’t know right away. (That, too, is a convention Descartes gave us.) x carries with it connotations of the unknown, the sought-after, the mysterious thing to be understood. The next axis we name y because … well, that’s a letter near x and we don’t much need it for anything else, I suppose. If we need another direction yet, if we want something in space rather than a plane, then the third axis we dub the z-axis. It’s perpendicular to the x- and the y-axis directions.

These aren’t the only names for these directions, though. It’s common and often convenient to describe positions of things using vector notation. A vector describes the relative distance and orientation of things. It’s compact symbolically. It lets one think of the position of things as a single variable, a single concept. Then we can talk about a position being a certain distance in the direction of the x-axis plus a certain distance in the direction of the y-axis. And, if need be, plus some distance in the direction of the z-axis.

The direction of the x-axis is often written as $\hat{i}$, and the direction of the y-axis as $\hat{j}$. The direction of the z-axis if needed gets written $\hat{k}$. The circumflex there indicates two things. First is that the thing underneath it is a vector. Second is that it’s a vector one unit long. A vector might have any length, including zero. It’s convenient to make some mention when it’s a nice one unit long.

Another popular notation is to write the direction of the x-axis as the vector $\hat{e}_1$, and the y-axis as the vector $\hat{e}_2$, and so on. This method offers several advantages. One is that we can talk about the vector $\hat{e}_j$, that is, some particular direction without pinning down just which one. That’s the equivalent of writing “x” or “y” for a number we don’t want to commit ourselves to just yet. Another is that we can talk about axes going off in two, or three, or four, or more directions without having to pin down how many there are. And then we don’t have to think of what to call them. x- and y- and z-axes make sense. w-axis sounds a little odd but some might accept it. v-axis? u-axis? Nobody wants that, trust me.

Sometimes people start the numbering from $\hat{e}_0$ so that the y-axis is the direction $\hat{e}_1$. Usually it’s either clear from context or else it doesn’t matter.
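The $\hat{e}_j$ notation translates neatly into code. As a small sketch of my own, not from the post: a basis vector has a 1 in slot j and 0 everywhere else, and a position is a sum of scaled basis vectors.

```python
def e_hat(j, n):
    # The unit vector e_j in n dimensions, numbered from 1 as in e-hat_1
    return tuple(1.0 if i == j else 0.0 for i in range(1, n + 1))

e1, e2 = e_hat(1, 2), e_hat(2, 2)

# A position: 3 units in the x-axis direction plus 4 in the y-axis direction
p = tuple(3.0 * a + 4.0 * b for a, b in zip(e1, e2))
print(p)  # (3.0, 4.0)
```

Nothing here cares how many dimensions there are, which is exactly the advantage of numbered basis vectors over running out of letters after x, y, and z.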

## Reading the Comics, July 19, 2015: Rerun Comics Edition

I’m stepping my blog back away from the daily posting schedule. It’s fun, but it’s also exhausting. Sometimes, Comic Strip Master Command helps out. It slowed the rate of mathematically-themed comics just enough.

By this post’s title I don’t mean that my post is a rerun. But several of the comics mentioned happen to be. One of the good — maybe best — things about the appearance of comics on Gocomics.com and ComicsKingdom is that comic strips that have ended, such as Randolph Itch, 2 am or (alas) Cul de Sac, can still appear without taking up space. And long-running comic strips such as Luann can have earlier strips shown to a new audience, again without doing any harm to the newest generation of cartoonists. So, there’s that.

Greg Evans’s Luann Againn (July 13, originally run July 13, 1987) makes a joke of Tiffany not understanding the odds of a contest. That’s amusing enough. Estimating the probability of something happening does require estimating how many things are possible, though, and how likely they are relative to one another. Supposing that every entry in a sweepstakes is equally likely to win seems fair enough. Estimating the number of sweepstakes entries is another problem.

Tom Toles’s Randolph Itch, 2 am (July 13, rerun from July 29, 2002) tells a silly little pirates-and-algebra joke. I like this one for the silliness and the artwork. The only sad thing is there wasn’t a natural way to work equations for a circle into it, so there’d be a use for “r”.

• #### KnotTheorist 3:24 am on Monday, 20 July, 2015

What a great collection of mathematical comics! I liked them all, but my favorite was the Saturday Morning Breakfast Cereal one about optimization.


• #### Joseph Nebus 5:06 am on Tuesday, 21 July, 2015

Quite glad you liked them. Saturday Morning Breakfast Cereal is an interesting comic strip because I have a lot of cause to talk about it here. It often gets into some mathematical concept in pretty substantial detail, in the service of setting up its joke. That’s a hard thing to do. It’s surprising it can be reasonably successful at that.


## Xor.

Xor comes to us from logic. In this field we look at propositions, which can be either true or false. Propositions serve the same role here that variables like “x” and “y” serve in algebra. They have some value. We might know what the value is to start with. We might be hoping to deduce what the value is. We might not actually care what the value is, but need a placeholder for it while we do other work.

A variable, or a proposition, can carry some meaning. The variable “x” may represent “the longest straight board we can fit around this corner”. The proposition “A” may represent “The blue house is the one for sale”. (Logic has a couple of conventions. In one we use capital letters from the start of the alphabet for propositions. In the other we use lowercase p’s and q’s and r’s and letters from that patch of the alphabet. This is a difference in dialect, not in content.) That’s convenient, since it can help us understand the meaning of a problem we’re working on, but it’s not essential. The process of solving an equation is the same whether or not the equation represents anything in the real world. So it is with logic.

We can combine propositions to make more interesting statements. If we know whether the propositions are true or false we know whether the statements are true. If we know starting out only that the statements are true (or false) we might be able to work out whether the propositions are true or false.

Xor, the exclusive or, is one of the common combinations. Start with the propositions A and B, both of which may be true or may be false. A Xor B is a true statement when A is true while B is false, or when A is false while B is true. It’s false when A and B are simultaneously false. It’s also false when A and B are simultaneously true.

It’s the logic of whether a light bulb on a two-way switch is on. If one switch is on and the other off, the bulb is on. If both switches are on, or both switches off, the bulb is off. This is also the logic of what’s offered when the menu says you can have french fries or onion rings with your sandwich. You can get both, but it’ll cost an extra 95 cents.
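In code, xor amounts to asking whether the two truth values disagree. A minimal sketch of my own, writing out the full truth table described above:

```python
def xor(a, b):
    # True exactly when one of a, b is true and the other false
    return a != b

# The full truth table: true only when the propositions disagree
for a in (False, True):
    for b in (False, True):
        print(a, b, "->", xor(a, b))
```

This is the two-way light-switch logic: flip either input and the output flips with it.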

• #### sheldonk2014 3:20 pm on Friday, 17 July, 2015

You’re telling me there’s a mathematical theory to McDonald’s craziness
I can believe it
In my mind it is a stretch
Sheldon


• #### Joseph Nebus 5:02 am on Tuesday, 21 July, 2015

There’s a surprising amount of mathematics behind stuff McDonald’s does, actually. Have you ever encountered the McNugget problem?

The idea dates to the early days when Chicken McNuggets were sold in packs of 6, 9, or 20. If you wanted to get 12 McNuggets, that’s easy enough: buy two packs of 6. If you want 15, buy a pack of 9 and a pack of 6. If you want 18, buy three packs of 6 or two packs of 9. If you want 26, buy a pack of 20 and a pack of 6. And so on.

But you can’t get exactly seven McNuggets. And you can’t get exactly ten. You can’t get exactly 19, either.

What’s the largest number of McNuggets you can’t buy, then?
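That question can be settled by brute force. Here is a sketch of my own, using the 6, 9, and 20 pack sizes from the comment: a count of n McNuggets is buyable exactly when removing some pack size leaves a buyable count.

```python
packs = (6, 9, 20)
limit = 100  # far enough out that every larger count is buyable

# buyable[n] is True when exactly n McNuggets can be bought
buyable = [False] * (limit + 1)
buyable[0] = True  # buy nothing at all
for n in range(1, limit + 1):
    buyable[n] = any(n >= p and buyable[n - p] for p in packs)

largest = max(n for n in range(1, limit + 1) if not buyable[n])
print(largest)  # 43
```

So 43 is the last count you can’t hit; once six consecutive counts are all buyable, adding packs of 6 covers everything beyond.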


## Well-Posed Problem.

This is another mathematical term almost explained by what the words mean in English. Probably you’d guess a well-posed problem to be a question whose answer you can successfully find. This also implies that there is an answer, and that it can be found by some method other than guessing luckily.

Mathematicians demand three things of a problem to call it “well-posed”. The first is that a solution exists. The second is that a solution has to be unique. It’s imaginable there might be several answers that satisfy a problem. In that case we weren’t specific enough about what we’re looking for. Or we should have been looking for a set of answers instead of a single answer.

The third requirement takes some time to understand. It’s that the solution has to vary continuously with the initial conditions. That is, suppose we started with a slightly different problem. If the answer would look about the same, then the problem was well-posed to begin with. Suppose we’re looking at the problem of how a block of ice gets melted by a heater set in its center. The way that melts won’t change much if the heater is a little bit hotter, or if it’s moved a little bit off center. This heating problem is well-posed.

There are problems that don’t have this continuous variation, though. Typically these are “inverse problems”. That is, they’re problems in which you look at the outcome of something and try to say what caused it. That would be looking at the puddle of melted water and the heater and trying to say what the original block of ice looked like. There are a lot of blocks of ice that all look about the same once melted, and there’s no way of telling which was the one you started with.

You might think of these conditions as “there’s an answer, there’s only one answer, and you can find it”. That’s good enough as a memory aid, but it isn’t quite so. A problem’s solution might have this continuous variation, but still be “numerically unstable”. This is a difficulty you can run across when you try doing calculations on a computer.

You know the thing where on a calculator you type in 1 / 3 and get back 0.333333? And you multiply that by three and get 0.999999 instead of exactly 1? That’s the thing that underlies numerical instability. We want to work with numbers, but the calculator or computer will let us work with only an approximation to them. 0.333333 is close to 1/3, but isn’t exactly that.

For many calculations the difference doesn’t matter. 0.999999 is really quite close to 1. If you lost 0.000001 parts of every dollar you earned there’s a fine chance you’d never even notice. But in some calculations, numerically unstable ones, that difference matters. It gets magnified until the error created by the difference between the number you want and the number you can calculate with is too big to ignore. In that case we call the calculation we’re doing “ill-conditioned”.
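One standard illustration of that magnification is the quadratic formula; this sketch is my own, not from the post. For $x^2 - bx + c = 0$ with a large b, the textbook formula subtracts two nearly equal numbers, and most of the significant digits cancel away. Rewriting the same calculation so there’s no subtraction gives an equivalent but well-conditioned form:

```python
import math

b, c = 1.0e8, 1.0  # x^2 - b*x + c = 0; the true small root is about 1e-8

disc = math.sqrt(b * b - 4.0 * c)

# Ill-conditioned form: b and disc agree in nearly every digit,
# so the subtraction leaves mostly rounding error
naive = (b - disc) / 2.0

# Equivalent, well-conditioned form: an addition instead of a cancellation
stable = 2.0 * c / (b + disc)

print(naive, stable)  # naive is off by roughly 25 percent; stable is accurate
```

Both lines compute the same root on paper. In floating point only the second one can be trusted, which is the sort of rearrangement numerical mathematicians get paid for.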

And it’s possible for a problem to be well-posed but ill-conditioned. This is annoying and is why numerical mathematicians earn the big money, or will tell you they should. Trying to calculate the answer will be so likely to give something meaningless that we can’t trust the work that’s done. But often it’s possible to rework a calculation into something equivalent but well-conditioned. And a well-posed, well-conditioned problem is great. Not only can we find its solution, but we can usually have a computer do the calculations, and that’s a great breakthrough.

• #### rennydiokno2015 8:11 am on Thursday, 16 July, 2015

Reblogged this on My Blog News.

