Updates from December, 2016

  • Joseph Nebus 6:00 pm on Thursday, 29 December, 2016
    Tags: China, Mersenne numbers

    The End 2016 Mathematics A To Z: Yang Hui’s Triangle 

    Today’s is another request from gaurish and another I’m glad to have as it let me learn things too. That’s a particularly fun kind of essay to have here.

    Yang Hui’s Triangle.

    It’s a triangle. Not because we’re interested in triangles, but because it’s a particularly good way to organize what we’re doing and show why we do that. We’re making an arrangement of numbers. First we need cells to put the numbers in.

    Start with a single cell in what’ll be the top middle of the triangle. It spreads out in rows beneath that. The rows are staggered. The second row has two cells, each one-half width to the side of the starting one. The third row has three cells, each one-half width to the sides of the row above, so that its center cell is directly under the original one. The fourth row has four cells, two of which are exactly underneath the cells of the second row. The fifth row has five cells, three of them directly underneath the third row’s cells. And so on. You know the pattern. It’s the pattern that the pins in a plinko board take, just trimmed down to a triangle. Make as many rows as you find interesting. You can always add more later.

    In the top cell goes the number ‘1’. There’s also a ‘1’ in the leftmost cell of each row, and a ‘1’ in the rightmost cell of each row.

    What of interior cells? The number for those we work out by looking to the row above. Take the cells to the immediate left and right of it. Add the values of those together. So for example the center cell in the third row will be ‘1’ plus ‘1’, commonly regarded as ‘2’. In the fourth row the leftmost cell is ‘1’; it always is. The next cell over will be ‘1’ plus ‘2’, from the row above. That’s ‘3’. The cell next to that will be ‘2’ plus ‘1’, a subtly different ‘3’. And the last cell in the row is ‘1’ because it always is. In the fifth row we get, starting from the left, ‘1’, ‘4’, ‘6’, ‘4’, and ‘1’. And so on.

    It’s a neat little arithmetic project. It has useful application beyond the joy of making something neat. Many neat little arithmetic projects don’t have that. But the numbers in each row give us binomial coefficients, which we often want to know. That is, if we wanted to work out (a + b) to, say, the fourth power, we would know what it looks like from looking at the fifth row of Yang Hui’s Triangle. It will be 1\cdot a^4 + 4\cdot a^3 \cdot b^1 + 6\cdot a^2\cdot b^2 + 4\cdot a^1\cdot b^3 + 1\cdot b^4 . This turns up in polynomials all the time.
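    The addition rule, and its connection to binomial coefficients, are both easy to check in code. Here’s a minimal sketch in Python (the function and variable names are mine):

```python
import math

def yang_hui_triangle(rows):
    """Build rows of Yang Hui's triangle by the addition rule:
    the edges are always 1, and each interior cell is the sum
    of the two cells above it in the previous row."""
    triangle = [[1]]
    for _ in range(rows - 1):
        prev = triangle[-1]
        nxt = [1] + [prev[i] + prev[i + 1] for i in range(len(prev) - 1)] + [1]
        triangle.append(nxt)
    return triangle

for row in yang_hui_triangle(5):
    print(row)  # [1], [1, 1], [1, 2, 1], [1, 3, 3, 1], [1, 4, 6, 4, 1]

# The fifth row lists the binomial coefficients C(4, k), which
# expand (a + b)^4; check with, say, a = 2 and b = 3.
a, b = 2, 3
expansion = sum(math.comb(4, k) * a**(4 - k) * b**k for k in range(5))
print(expansion, (a + b)**4)  # 625 both ways
```

    The standard library’s math.comb computes the same coefficients directly, which is what the last few lines lean on.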

    Look at diagonals. By diagonal here I mean a line parallel to the line of ‘1’s. Left side or right side; it doesn’t matter. Yang Hui’s triangle is bilaterally symmetric around its center. The first diagonal under the edges is a bit boring but familiar enough: 1-2-3-4-5-6-7-et cetera. The second diagonal is more curious: 1-3-6-10-15-21-28 and so on. You’ve seen those numbers before. They’re called the triangular numbers. They’re the number of dots you need to make a uniformly spaced, staggered-row triangle. Doodle a bit and you’ll see. Or play with coins or pool balls.

    The third diagonal looks more arbitrary yet: 1-4-10-20-35-56-84 and on. But these are something too. They’re the tetrahedral numbers. They’re the number of things you need to make a tetrahedron. Try it out with a couple of balls. Oranges if you’re bored at the grocer’s. Four, ten, twenty, these make a nice stack. The fourth diagonal is a bunch of numbers I never paid attention to before. 1-5-15-35-70-126-210 and so on. This is — well. We just did tetrahedrons, the triangular arrangement of three-dimensional balls. Before that we did triangles, the triangular arrangement of two-dimensional discs. Do you want to put in a guess what these “pentatope numbers” are about? Sure, but you hardly need to. If we’ve got a bunch of four-dimensional hyperspheres and want to stack them in a neat triangular pile we need one, or five, or fifteen, or so on to make the pile come out neat. You can guess what might be in the fifth diagonal. I don’t want to think too hard about making triangular heaps of five-dimensional hyperspheres.
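    Each of these diagonals is just the running total of the diagonal before it, which you can verify in a few lines (the variable names are mine):

```python
from itertools import accumulate

naturals = list(range(1, 8))                 # 1, 2, 3, 4, 5, 6, 7
triangular = list(accumulate(naturals))      # 1, 3, 6, 10, 15, 21, 28
tetrahedral = list(accumulate(triangular))   # 1, 4, 10, 20, 35, 56, 84
pentatope = list(accumulate(tetrahedral))    # 1, 5, 15, 35, 70, 126, 210
print(triangular, tetrahedral, pentatope)
```

    That running-total relationship is exactly the triangle’s addition rule viewed sideways: each cell is the cell above it plus everything earlier along its own diagonal.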

    There’s more stuff lurking in here, waiting to be decoded. Add the numbers of, say, row four up and you get two raised to the third power. Add the numbers of row ten up and you get two raised to the ninth power. You see the pattern. Add everything in, say, the top five rows together and you get the fifth Mersenne number, two raised to the fifth power (32) minus one (31, when we’re done). Add everything in the top ten rows together and you get the tenth Mersenne number, two raised to the tenth power (1024) minus one (1023).
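    Those sums are quick to check, counting the single cell at the top as row one as the essay does (the helper name is mine):

```python
import math

def row(n):
    # row n of the triangle, counting the top '1' as row one
    return [math.comb(n - 1, k) for k in range(n)]

print(sum(row(4)))  # 8, which is two raised to the third power

total = sum(sum(row(n)) for n in range(1, 11))
print(total)  # 1023, the tenth Mersenne number: 2^10 - 1
```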

    Or add together things on “shallow diagonals”. Start from a ‘1’ on the outer edge. I’m going to suppose you started on the left edge, but remember symmetry; it’ll be fine if you go from the right instead. Add to that ‘1’ the number you get by moving one cell to the right and going up-and-right. And then again, go one cell to the right and then one cell up-and-right. And again and again, until you run out of cells. You get the Fibonacci sequence, 1-1-2-3-5-8-13-21-and so on.
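    In terms of binomial coefficients, a shallow-diagonal sum adds up C(n − k, k). Here’s a sketch; note that the formula comes out tidiest if, just for this bit of code, we index the top row as row zero:

```python
import math

def shallow_diagonal_sum(n):
    # start at the left-edge 1 of row n (top row counted as row zero)
    # and keep stepping one cell to the right and one cell up-and-right
    return sum(math.comb(n - k, k) for k in range(n // 2 + 1))

print([shallow_diagonal_sum(n) for n in range(8)])
# [1, 1, 2, 3, 5, 8, 13, 21] -- the Fibonacci sequence
```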

    We can even make an astounding picture from this. Take the cells of Yang Hui’s triangle. Color them in. One shade if the cell has an odd number, another if the cell has an even number. It will create a pattern we know as the Sierpiński Triangle. (Wacław Sierpiński is proving to be the surprise special guest star in many of this A To Z sequence’s essays.) That’s the fractal of a triangle subdivided into four triangles with the center one knocked out, and the remaining triangles themselves subdivided into four triangles with the center knocked out, and on and on.
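    You can watch the pattern emerge even in a text terminal. A small sketch that prints ‘#’ for odd cells and leaves even cells blank:

```python
import math

def parity_row(n):
    # '#' where the binomial coefficient is odd, ' ' where it is even
    return ''.join('#' if math.comb(n, k) % 2 else ' ' for k in range(n + 1))

for n in range(16):
    print(parity_row(n).center(31))
```

    Even the first sixteen rows show the triangles-with-centers-knocked-out structure.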

    By now I imagine even my most skeptical readers agree this is an interesting, useful mathematical construct. Also that they’re wondering why I haven’t said the name “Blaise Pascal”. The Western mathematical tradition knows of this from Pascal’s work, particularly his 1653 Traité du triangle arithmétique. But mathematicians like to say their work is universal, and independent of the mere human beings who find it. Constructions like this triangle give support to this. Yang lived in China, in the 13th century. I imagine it possible Pascal had heard of his work or been influenced by it, by some chain, but I know of no evidence that he did.

    And even if he had, there are other apparently independent inventions. The Avanti Indian astronomer-mathematician-astrologer Varāhamihira described the addition rule which makes the triangle work in commentaries written around the year 500. Omar Khayyám, who keeps appearing in the history of science and mathematics, wrote about the triangle in his 1070 Treatise on Demonstration of Problems of Algebra. Again so far as I am aware there’s not a direct link between any of these discoveries. They are things different people in different traditions found because the tools — arithmetic and aesthetically-pleasing orders of things — were ready for them.

    Yang Hui wrote about his triangle in the 1261 book Xiangjie Jiuzhang Suanfa. In it he credits the use of the triangle (for finding roots) to the mathematician Jia Xian, who devised it around 1100. This reminds us that it is not merely mathematical discoveries that are found by many peoples at many times and places. So is Boyer’s Law, discovered by Hubert Kennedy.

    • gaurish 6:46 pm on Thursday, 29 December, 2016

      This is the first time that I have read an article about Pascal’s triangle without a picture of it in front of me and could still imagine it in my mind. :)


      • Joseph Nebus 5:22 am on Thursday, 5 January, 2017

        Thank you; I’m glad you like it. I did spend a good bit of time before writing the essay thinking about why it is a triangle that we use for this figure, and that helped me think about how things are organized and why. (The one thing I didn’t get into was identifying the top row, the single cell, as row zero. Computers may index things starting from zero and there may be fair reasons to do it, but that is always going to be a weird choice for humans.)


  • Joseph Nebus 6:00 pm on Tuesday, 27 December, 2016
    Tags: Riemann hypothesis

    The End 2016 Mathematics A To Z: Xi Function 

    I have today another request from gaurish, who’s also been good enough to give me requests for ‘Y’ and ‘Z’. I apologize for coming to this a day late. But it was Christmas and many things demanded my attention.

    Xi Function.

    We start with complex-valued numbers. People discovered them because they were useful tools to solve polynomials. They turned out to be more than useful fictions, if numbers are anything more than useful fictions. We can add and subtract them easily. Multiply and divide them less easily. We can even raise them to powers, or raise numbers to them.

    If you become a mathematics major then somewhere in Intro to Complex Analysis you’re introduced to an exotic, infinitely large sum. It’s spoken of reverently as the Riemann Zeta Function, and it connects to something named the Riemann Hypothesis. Then you remember that you’ve heard of this, because if you’re willing to become a mathematics major you’ve read mathematics popularizations. And you know the Riemann Hypothesis is an unsolved problem. It proposes something that might be true or might be false. Either way has astounding implications for the way numbers fit together.

    Riemann here is Bernhard Riemann, who’s turned up often in these A To Z sequences. We saw him in spheres and in sums, leading to integrals. We’ll see him again. Riemann just covered so much of 19th century mathematics; we can’t talk about calculus without him. Zeta, Xi, and later on, Gamma are the famous Greek letters. Mathematicians fall back on them because the Roman alphabet just hasn’t got enough letters for our needs. I’m writing them out as English words instead because if you aren’t familiar with them they look like an indistinct set of squiggles. Even if you are familiar, sometimes. I got confused in researching this some because I did slip between a lowercase-xi and a lowercase-zeta in my mind. All I can plead is it’s been a hard week.

    Riemann’s Zeta function is famous. It’s easy to approach. You can write it as a sum. An infinite sum, but still, those are easy to understand. Pick a complex-valued number. I’ll call it ‘s’ because that’s the standard. Next take each of the counting numbers: 1, 2, 3, and so on. Raise each of them to the power ‘s’. And take the reciprocal, one divided by those numbers. Add all that together. You’ll get something. Might be real. Might be complex-valued. Might be zero. We know many values of ‘s’ that would give us a zero. The Riemann Hypothesis is about characterizing all the possible values of ‘s’ that give us a zero. We know some of them, so boring we call them trivial: -2, -4, -6, -8, and so on. (This looks crazy. There’s another way of writing the Riemann Zeta function which makes it obvious instead.) The Riemann Hypothesis is about whether all the proper, that is, non-boring values of ‘s’ that give us a zero are 1/2 plus some imaginary number.
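    The sum converges whenever the real part of ‘s’ is bigger than 1, so a partial sum is easy to play with. Here’s a sketch at s = 2, where the answer is famously π²/6:

```python
import math

def zeta_partial(s, terms):
    # partial sum 1/1^s + 1/2^s + ... + 1/terms^s; converges for Re(s) > 1
    return sum(1 / n**s for n in range(1, terms + 1))

approx = zeta_partial(2, 100_000)
print(approx, math.pi**2 / 6)  # both about 1.6449
```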

    It’s a rare thing mathematicians have only one way of writing. If something’s been known and studied for a long time there are usually variations. We find different ways to write the problem. Or we find different problems which, if solved, would solve the original problem. The Riemann Xi function is an example of this.

    I’m going to spare you the formula for it. That’s in self-defense. I haven’t found an expression of the Xi function that isn’t a mess. The normal ways to write it themselves call on the Zeta function, as well as the Gamma function. The Gamma function looks like factorials, for the counting numbers. It does its own thing for other complex-valued numbers.

    That said, I’m not sure what the advantages are in looking at the Xi function. The one that people talk about is its symmetry. Its value at a particular complex-valued number ‘s’ is the same as its value at the number ‘1 – s’. This may not seem like much. But it gives us this way of rewriting the Riemann Hypothesis. Imagine all the complex-valued numbers with the same imaginary part. That is, all the numbers that we could write as, say, ‘x + 4i’, where ‘x’ is some real number. If the size of the value of Xi, evaluated at ‘x + 4i’, always increases as ‘x’ starts out equal to 1/2 and increases, then the Riemann hypothesis is true. (This has to be true not just for ‘x + 4i’, but for all possible imaginary numbers. So, ‘x + 5i’, and ‘x + 6i’, and even ‘x + 4.1 i’ and so on. But it’s easier to start with a single example.)
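    If you do want a peek at one standard form of the formula, the symmetry can be checked numerically in a few lines. This sketch writes ξ(s) = s(s−1)/2 · π^(−s/2) · Γ(s/2) · ζ(s) and leans on two famous special values of the zeta function, ζ(2) = π²/6 and ζ(−1) = −1/12 (the latter by analytic continuation), so it needs nothing beyond the standard library:

```python
import math

def xi(s, zeta_s):
    # one standard completed form: xi(s) = s(s-1)/2 * pi^(-s/2) * Gamma(s/2) * zeta(s);
    # zeta_s is supplied by hand here since computing zeta off the real line
    # takes more machinery than this sketch wants
    return 0.5 * s * (s - 1) * math.pi ** (-s / 2) * math.gamma(s / 2) * zeta_s

xi_2 = xi(2.0, math.pi**2 / 6)   # zeta(2) = pi^2 / 6
xi_m1 = xi(-1.0, -1.0 / 12)      # zeta(-1) = -1/12, by analytic continuation
print(xi_2, xi_m1)               # both equal pi/6, since xi(s) = xi(1 - s)
```

    Here s = 2 and 1 − s = −1, so the matching values are one small instance of the symmetry.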

    Or another way to write it. Suppose the size of the value of Xi, evaluated at ‘x + 4i’ (or whatever), always gets smaller as ‘x’ starts out at a negative infinitely large number and keeps increasing all the way to 1/2. If that’s true, and true for every imaginary number, including ‘x – i’, then the Riemann hypothesis is true.

    And it turns out if the Riemann hypothesis is true we can prove the two cases above. We’d write the theorem about this in our papers with the start ‘The Following Are Equivalent’. In our notes we’d write ‘TFAE’, which is just as good. Then we’d take whichever of them seemed easiest to prove and find out it isn’t that easy after all. But if we do get through we declare ourselves fortunate, sit back feeling triumphant, and consider going out somewhere to celebrate. But we haven’t got any of these alternatives solved yet. None of the equivalent ways to write it has helped so far.

    We know some things. For example, we know there are infinitely many roots for the Xi function with a real part that’s 1/2. This is what we’d need for the Riemann hypothesis to be true. But we don’t know that all of them are.

    The Xi function isn’t entirely about what it can tell us for the Zeta function. The Xi function has its own exotic and wonderful properties. In a 2009 paper on arxiv.org, for example, Drs Yang-Hui He, Vishnu Jejjala, and Djordje Minic describe how if the zeroes of the Xi function are all exactly where we expect them to be then we learn something about a particular kind of string theory. I admit not knowing just what to say about a genus-one free energy of the topological string past what I have read in this paper. In another paper they write of how the zeroes of the Xi function correspond to the description of the behavior for a quantum-mechanical operator that I just can’t find a way to describe clearly in under three thousand words.

    But mathematicians often speak of the strangeness that mathematical constructs can match reality so well. And here is surely a powerful one. We learned of the Riemann Hypothesis originally by studying how many prime numbers there are compared to the counting numbers. If it’s true, then the physics of the universe may be set up one particular way. Is that not astounding?

    • gaurish 5:34 am on Wednesday, 28 December, 2016

      Yes it’s astounding. You have a very nice talent of talking about mathematical quantities without showing formulas :)


      • Joseph Nebus 5:15 am on Thursday, 5 January, 2017

        You’re most kind, thank you. I’ve probably gone overboard in avoiding formulas lately though.


  • Joseph Nebus 6:00 pm on Sunday, 25 December, 2016

    Reading the Comics, December 23, 2016: Weak Pretexts Edition 

    I’ve set the cutoff for the strips this week at Friday because you know how busy the day before Christmas is. If you don’t, then I ask that you trust me: it’s busy. If Comic Strip Master Command sent me anything worthy of comment I’ll report on it next year. I had thought this week’s set of mathematically-themed comics were a weak bunch that I had to strain to justify covering. But on looking at the whole essay … mm. I’m comfortable with it.

    Bill Amend’s FoxTrot for the 18th has two mathematical references. Writing “three kings” as “square root of nine kings” is an old sort of joke at least in mathematical circles. It’s about writing a number using some elaborate but valid expression. It’s good fun. A few years back I had a calendar with a mathematics puzzle of the day. And that was fun up until I noticed the correct answer was always the day of the month so that, say, today’s would have an answer of 25. This collapsed the calendar from being about solving problems to just verifying the solution. It’s a little change but it is one that spoiled it for me.

    And a few years back an aunt and uncle gave me an “Irrational Watch”, with the dial marked only by irrational numbers — π, for example, a little bit clockwise of where ‘3’ ought to go. The one flaw: it used the Euler-Mascheroni constant, a number that’s about 0.57, to designate that time a little past 12:30. The Euler-Mascheroni constant isn’t actually known to be irrational. It’s the way to bet, but it might just be a rational number after all.

    A binary tree, mentioned in the bottom row, is a tree for which every vertex is connected to at most three others. We’ve seen trees recently. And there’s a specially designated vertex known as the root. Each vertex (except the root) is connected to exactly one vertex that’s closer to the root. The structure looks like either the branches or the roots of a tree, depending whether the root is put at the top or bottom. And if you accept a strikingly minimal, Mid-Century Modern style drawing of a (natural) tree.
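    That description translates directly into code. A minimal sketch (the names are mine): each vertex holds a value and at most two children, and every vertex except the root hangs off exactly one parent.

```python
class Node:
    """A vertex of a binary tree: a value, and at most two children."""
    def __init__(self, value, left=None, right=None):
        self.value = value
        self.left = left    # each child vertex has exactly one parent
        self.right = right

def count(node):
    # total number of vertices in the tree rooted at this node
    return 0 if node is None else 1 + count(node.left) + count(node.right)

root = Node(1, left=Node(2, left=Node(4)), right=Node(3))
print(count(root))  # 4 vertices in this little tree
```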

    Quincy, in monologue: 'I don't understand this math lesson. What I need most is some excuse for not doing my homework. And all I can think of right now is a city-wide blackout.'

    Ted Shearer’s Quincy for the 26th of October, 1977. Reprinted the 20th of December, 2016. There’s much that’s expressive here, although what most catches my eye is the swoopy curves of the soda straw.

    Ted Shearer’s Quincy for the 26th of October, 1977 mentions mathematics. So I’m using that as an excuse to include it. Mostly I like it for the artwork. But the mention of mathematics was strikingly arbitrary; the joke would be the same were it his English homework or Geography or anything else. I suppose mathematics got the nod because it can be written with so few letters. (Art is even more compact, but it would probably distract the reader trying to think of what would be hard to understand about an Art project homework. Difficult, if it were painting or crafting something, sure, but that’s not a challenge in understanding.)

    I don’t know what Marty Links’s Emmy Lou for the 20th was getting at. The strip originally ran the 15th of September, 1964. It seems to be referring to some ephemeral trivia passed around the summer of that year. My guess is it refers to some estimation of the unattached male and female populations of some age set and finding that there were very different numbers. That sort of result is typically done by clever definition, sometimes assisted by double-counting and other sleights of hand. It’s legitimate if you accept the definition. But before reacting too much to any surprising claim one should know just what the claim is, and why it’s that.

    Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 20th is mathematics wordplay. Simpson’s Approximation, mentioned here, is a calculus thing. It’s about integrals. We use it to estimate the value of an integral. We find a parabola that resembles the original function. And then the integral of the parabola should be close to the integral of the original function. The advantage of using a parabola is that we know exactly how to integrate that. You may have noticed a lot of calculus is built on finding a problem that we can do that looks enough like the one we want to do. There’s also a Simpson’s 3/8th Approximation. It uses a cubic polynomial instead of a parabola for the approximation. We can integrate cubics exactly too. It’s called the 3/8 Approximation, or the 3/8 Rule, because the formula for it starts off with a 3/8. So now and then a mathematics thing is named appropriately. Simpson’s Approximation is named for Thomas Simpson, an 18th century mathematician and textbook writer who did show the world the 3/8 Approximation. But other people are known to have used Simpson’s non-3/8 Approximation a century or more before Simpson was born.
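    Simpson’s Approximation itself fits in a few lines. This sketch is the composite version, splitting the interval into an even number of pieces; and since the parabola-based rule happens to be exact for cubics too, integrating x² over [0, 1] really does give 1/3:

```python
def simpson(f, a, b, n=2):
    """Composite Simpson's rule; n, the number of subintervals, must be even."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        # interior sample points alternate weights 4, 2, 4, 2, ...
        total += (4 if i % 2 else 2) * f(a + i * h)
    return total * h / 3

print(simpson(lambda x: x * x, 0.0, 1.0))       # 0.3333..., exactly 1/3
print(simpson(lambda x: x**3, 0.0, 2.0, n=4))   # 4.0, exact for cubics too
```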

    Jason Poland’s Robbie and Bobby earns a bit of attention from me twice this week. The first strip, from the 21st, riffs on the “randomness” of the random acts of kindness. Robbie strives for a truly random act. Randomness is tricky. We’re pretty sure we know what it ought to look like, but it’s so very hard to ever be sure we have randomness. We have a set of possible outcomes of whatever we’re doing; but, should every one of those outcomes be equally likely? Should some be more likely than others? Should some be almost inevitable but a few have long-shot chances? I suspect that when we say “truly random” we are thinking of a uniform distribution, with every different outcome being equally likely. That isn’t always what fits the situation.

    And then on the 23rd the strip names “Zeno’s Paradoxical Pasta”. There are several paradoxes; this surely refers to the one about not being able to cross a distance because one must always get halfway across the distance first. It’s a reliably funny idea. It’s not a paradox by itself, though. What makes the paradox is that Zeno presents several scenarios which ask that we decide whether space and time and movement can be infinitely subdivided or not, and either decision brings up new difficulties.

  • Joseph Nebus 6:00 pm on Friday, 23 December, 2016

    The End 2016 Mathematics A To Z: Weierstrass Function 

    I’ve teased this one before.

    Weierstrass Function.

    So you know how the Earth is a sphere, but from our normal vantage point right up close to its surface it looks flat? That happens with functions too. Here I mean the normal kinds of functions we deal with, ones with domains that are the real numbers or a Euclidean space. And ranges that are real numbers. The functions you can draw on a sheet of paper with some wiggly bits. Let the function wiggle as much as you want. Pick a part of it and zoom in close. That zoomed-in part will look straight. If it doesn’t look straight, zoom in closer.

    We rely on this. Functions that are straight, or at least straight enough, are easy to work with. We can do calculus on them. We can do analysis on them. Functions with plots that look like straight lines are easy to work with. Often the best approach to working with the function you’re interested in is to approximate it with an easy-to-work-with function. I bet it’ll be a polynomial. That serves us well. Polynomials are these continuous functions. They’re differentiable. They’re smooth.

    That thing about the Earth looking flat, though? That’s a lie. I’ve never been to any of the really great cuts in the Earth’s surface, but I have been to some decent gorges. I went to grad school in the Hudson River Valley. I’ve driven I-80 over Pennsylvania’s scariest bridges. There’s points where the surface of the Earth just drops a great distance between one footstep and the next.

    Functions do that too. We can have points where a function isn’t differentiable, where it’s impossible to define the direction it’s headed. We can have points where a function isn’t continuous, where it jumps from one region of values to another region. Everyone knows this. We can’t dismiss those as aberrations not worthy of the name “function”; too many of them are too useful. Typically we handle this by admitting there’s points that aren’t continuous and we chop the function up. We make it into a couple of functions, each stretching from discontinuity to discontinuity. Between them we have a continuous region and we can go about our business as before.

    Then came the 19th century when things got crazy. This particular craziness we credit to Karl Weierstrass. Weierstrass’s name is all over 19th century analysis. He had that talent for probing the limits of our intuition about basic mathematical ideas. We have a calculus that is logically rigorous because he found great counterexamples to what we had assumed without proving.

    The Weierstrass function challenges this idea that any function is going to eventually level out. Or that we can even smooth a function out into basically straight, predictable chunks in-between sudden changes of direction. The function is continuous everywhere; you can draw it perfectly without lifting your pen from paper. But it always looks like a zig-zag pattern, jumping around like it was always randomly deciding whether to go up or down next. Zoom in on any patch and it still jumps around, zig-zagging up and down. There’s never an interval where it’s always moving up, or always moving down, or even just staying constant.
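    If you’d like to poke at this numerically: one classical form of the function is the sum of aⁿ·cos(bⁿπx) over n, with 0 < a < 1 and the product ab sufficiently large. The particular parameters and the truncation below are my own choices for a sketch, not the only ones:

```python
import math

def weierstrass(x, a=0.5, b=13, terms=40):
    # truncated Weierstrass-type sum: a^n * cos(b^n * pi * x), n = 0..terms-1
    return sum(a**n * math.cos(b**n * math.pi * x) for n in range(terms))

# The difference quotient refuses to settle down as h shrinks,
# which is the numerical shadow of non-differentiability at 0:
for h in (1e-2, 1e-4, 1e-6):
    print(h, (weierstrass(h) - weierstrass(0.0)) / h)
```

    With a smooth function those printed slopes would converge; here they jump around wildly no matter how small h gets.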

    Despite being continuous it’s not differentiable. I’ve described that casually as it being impossible to predict where the function is going. That’s an abuse of words, yes. The function is defined. Its value at a point isn’t any more random than the value of “x²” is for any particular x. The unpredictability I’m talking about here is a side effect of ignorance. Imagine I showed you a plot of “x²” with a part of it concealed and asked you to fill in the gap. You’d probably do pretty well estimating it. The Weierstrass function, though? No; your guess would be lousy. My guess would be lousy too.

    That’s a weird thing to have happen. A century and a half later it’s still weird. It gets weirder. The Weierstrass function isn’t differentiable generally. But there are exceptions. There are little dots of differentiability, where the rate at which the function changes is known. Not intervals, though. Single points. This is crazy. Derivatives are about how a function changes. We work out what they should even mean by thinking of a function’s value on strips of the domain. Those strips are small, but they’re still, you know, strips. But on almost all of that strip the derivative isn’t defined. It’s only at isolated points, a set with measure zero, that this derivative even exists. It evokes the medieval Mysteries, of how we are supposed to try, even though we know we shall fail, to understand how God can have contradictory properties.

    It’s not quite that Mysterious here. Properties like this challenge our intuition, if we’ve gotten any. Once we’ve laid out good definitions for ideas like “derivative” and “continuous” and “limit” and “function” we can work out whether results like this make sense. And they — well, they follow. We can avoid weird conclusions like this, but at the cost of messing up our definitions for what a “function” and other things are. Making those useless. For the mathematical world to make sense, we have to change our idea of what quite makes sense.

    That’s all right. When we look close we realize the Earth around us is never flat. Even reasonably flat areas have slight rises and falls. The ends of properties are marked with curbs or ditches, and bordered by streets that rise to a center. Look closely even at the dirt and we notice that as level as it gets there are still rocks and scratches in the ground, clumps of dirt an infinitesimal bit higher here and lower there. The flatness of the Earth around us is a useful tool, but we miss a lot by pretending it’s everything. The Weierstrass function is one of the ways a student mathematician learns that while smooth, predictable functions are essential, there is much more out there.

  • Joseph Nebus 6:00 pm on Wednesday, 21 December, 2016
    Tags: compression, Markov Chains

    The End 2016 Mathematics A To Z: Voronoi Diagram 

    This is one I never heard of before grad school. And not my first year in grad school either; I was pretty well past the point I should’ve been out of grad school before I remember hearing of it, somehow. I can’t explain that.

    Voronoi Diagram.

    Take a sheet of paper. Draw two dots on it. Anywhere you like. It’s your paper. But here’s the obvious thing: you can divide the paper into the parts of it that are nearer to the first, or that are nearer to the second. Yes, yes, I see you saying there’s also a line that’s exactly the same distance between the two and shouldn’t that be a third part? Fine, go ahead. We’ll be drawing that in anyway. But here we’ve got a piece of paper and two dots and this line dividing it into two chunks.

    Now drop in a third point. Now every point on your paper might be closer to the first, or closer to the second, or closer to the third. Or, yeah, it might be on an edge equidistant between two of those points. Maybe even equidistant to all three points. It’s not guaranteed there is such a “triple point”, but if you weren’t picking points to cause trouble there probably is. You get the page divided up into three regions that you say are coming together in a triangle before realizing that no, it’s a Y intersection. Or else the regions are three strips and they don’t come together at all.

    What if you have four points … You should get four regions. They might all come together in one grand intersection. Or they might come together at weird angles, two and three regions touching each other. You might get a weird one where there’s a triangle in the center and three regions that go off to the edge of the paper. Or all sorts of fun little abstract flag icons, maybe. It’s hard to say. If we had, say, 26 points all sorts of weird things could happen.

    These weird things are Voronoi Diagrams. They’re a partition of some surface. Usually it’s a plane or some well-behaved subset of the plane like a sheet of paper. The partitioning is into polygons. Exactly one of the points you start with is inside each of the polygons. And everything else inside that polygon is nearer to its one contained starting point than it is any other point. All you need for the diagram are your original points and the edges dividing spots between them. But the thing begs to be colored. Give in to it and you have your own, abstract, stained-glass window pattern. So I’m glad to give you some useful mathematics to play with.
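    The whole construction amounts to “label every point by its nearest starting point”, which makes for a short sketch (the seed positions here are mine, chosen arbitrarily):

```python
def voronoi_labels(seeds, width, height):
    """Label each cell of a width-by-height grid with the index of its nearest seed."""
    def nearest(x, y):
        # squared distance is enough for comparing which seed is closest
        return min(range(len(seeds)),
                   key=lambda i: (seeds[i][0] - x)**2 + (seeds[i][1] - y)**2)
    return [[nearest(x, y) for x in range(width)] for y in range(height)]

seeds = [(2, 2), (12, 3), (7, 9)]
for row in voronoi_labels(seeds, 16, 12):
    print(''.join('ABC'[label] for label in row))
```

    Printing one letter per cell gives a crude text-mode version of the stained-glass picture: three regions, each surrounding its own seed.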

    Voronoi diagrams turn up naturally whenever you want to divide up space by the shortest route to get something. Sometimes this is literally so. For example, a radio picking up two FM signals will lock onto the stronger of the two; that’s the capture effect at work. If the two signals are transmitted with equal strength, then the receiver will pick up on whichever the nearer signal is. And unless the other mathematicians who’ve talked about this were just as misinformed, cell phones pick which signal tower to communicate with by which one has the stronger signal. If you could look at what tower your cell phone communicates with as you move around, you would produce a Voronoi diagram of cell phone towers in your area.

    Mathematicians hoping to get credit for a good thing may also bring up Dr John Snow’s famous halting of an 1854 cholera epidemic in London. He did this by tracking cholera outbreaks and measuring their proximity to public water pumps. He shut down the water pump at the center of the severest outbreak and the epidemic soon stopped. One could claim this as a triumph for Voronoi diagrams, although Snow can not have had this tool in mind. Georgy Voronoy (yes, the spelling isn’t consistent. Fashions in transliterating Eastern European names — Voronoy was Ukrainian and worked in Warsaw when Poland was part of the Russian Empire — have changed over the years) wasn’t even born until 1868. And it doesn’t require great mathematical insight to look for the things an infected population has in common. But mathematicians need some tales of heroism too. And it isn’t as though we’ve run out of epidemics with sources that need tracking down.

    Voronoi diagrams turned out to be useful in my own meager research. I needed to model the flow of a fluid over a whole planet, but could only do so with a modest number of points to represent the whole thing. Scattering points over the planet was easy enough. To represent the fluid over the whole planet as a collection of single values at a couple hundred points required this Voronoi-diagram type division. … Well, it used them anyway. I suppose there might have been other ways. But I’d just learned about them and was happy to find a reason to use them. Anyway, this is the sort of technique often used to turn information about a single point into approximate information about a region.

    (And I discover some amusing connections here. Voronoy’s thesis advisor was Andrey Markov, for whom Markov Chains are named. You know those as the predictive-word things that are kind of amusing for a while. Markov Chains were part of the tool I used to scatter points over the whole planet. Also, Voronoy’s thesis was On A Generalization Of A Continued Fraction, so, hi, Gaurish! … And one of Voronoy’s doctoral students was Wacław Sierpiński, famous for fractals and normal numbers.)

    Voronoi diagrams have a lot of beauty to them. Some of it is subtle. Take a point inside its polygon and look to a neighboring polygon. Where is the representative point inside that neighbor polygon? … There’s only one place it can be. It’s got to be exactly as far from the edge between them as the original point is, along the direction perpendicular to that edge. It’s where you’d see the reflection of the original point if the border between them were a mirror. And that has to apply to all the polygons and their neighbors.
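    That mirror property is easy to check numerically. The edge between two neighboring cells lies on the perpendicular bisector of the segment joining their points, so reflecting one point across that line lands exactly on the other. A quick sketch in Python, with made-up points:

```python
# The edge between two Voronoi cells lies on the perpendicular
# bisector of the segment joining their sites. Reflecting one site
# across that line should land exactly on the other site.
def reflect_across_bisector(p, q):
    """Reflect p across the perpendicular bisector of segment pq."""
    mx, my = (p[0] + q[0]) / 2, (p[1] + q[1]) / 2   # midpoint, on the line
    nx, ny = q[0] - p[0], q[1] - p[1]               # normal to the line
    norm2 = nx * nx + ny * ny
    d = ((p[0] - mx) * nx + (p[1] - my) * ny) / norm2
    return (p[0] - 2 * d * nx, p[1] - 2 * d * ny)

p, q = (1.0, 2.0), (4.0, 6.0)    # two arbitrary example sites
print(reflect_across_bisector(p, q))   # lands on q: (4.0, 6.0)
```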

    From there it’s a short step to wondering: imagine you knew the edges. The mirrors. But you don’t know the original points. Could you figure out where the representative points must be to fit that diagram? … Or at least some points where they may be? This is the inverse problem, and it’s how I first encountered them. This inverse problem allows nice stuff like image compression. Remember my description of the result of a Voronoi diagram being a stained glass window image? There’s no reason a stained glass image can’t be quite good, if we have enough points and enough gradations of color. And storing a bunch of points and the color for each region is probably less demanding than storing the color information for every point in the original image.

    If we want images. Many kinds of data turn out to work pretty much like pictures, set up right.

    • gaurish 5:10 am on Thursday, 22 December, 2016 Permalink | Reply

      I didn’t know that Voronoy’s thesis was on continued fractions :) Few months ago, I was delighted to see the application of Voronoi Diagram to find answer to this simple geometry problem about maximization: http://math.stackexchange.com/a/1812338/214604


      • Joseph Nebus 5:11 am on Thursday, 5 January, 2017 Permalink | Reply

        I did not know either, until I started writing the essay. I’m glad for the side bits of information I get in writing this sort of thing.

        And I’m delighted to see the problem. I didn’t think of Voronoi diagrams as a way to study maximization problems but obviously, yeah, they would be.


    • gaurish 5:04 am on Tuesday, 3 January, 2017 Permalink | Reply

      • Joseph Nebus 5:40 am on Thursday, 5 January, 2017 Permalink | Reply

        You are quite right; I do like that. And it even has a loose connection as it is to my original thesis and its work; part of the problem was getting points spread out uniformly on a plane without them spreading out infinitely far, that is, getting them to cluster according to some imposed preference. It wasn’t artistic except in the way abstract mathematics is a bit artistic.

        Liked by 1 person

  • Joseph Nebus 6:00 pm on Monday, 19 December, 2016 Permalink | Reply
    Tags: , , , links,   

    The End 2016 Mathematics A To Z: Unlink 

    This is going to be a fun one. It lets me get into knot theory again.


    An unlink is what knot theorists call that heap of loose rubber bands in that one drawer compartment.

    The longer way around. It starts with knots. I love knots. If I were stronger on abstract reasoning and weaker on computation I’d have been a knot theorist. At least graph theory anyway. The mathematical idea of a knot is inspired by a string tied together. In making it a mathematical idea we perfect the string. It becomes as thin as a line, though it curves as much as we want. It can stretch out or squash down as much as we want. It slides frictionlessly against itself. Gravity doesn’t make it drop any. This removes the hassles of real-world objects from it. It also means actual strings or yarns or whatever can’t be knots anymore. Only something that’s a loop which closes back on itself can be a knot. The knot you might make in a shoelace, to use an example, could be undone by pushing the tip back through the ‘knot’. Since our mathematical string is frictionless we can do that, effortlessly. We’re left with nothing.

    But you can create a pretty good approximation to a mathematical knot if you have some kind of cable that can be connected to its own end. Loop the thing around as you like, connect end to end, and you’ve got it. I recommend the glow sticks sold for people to take to parties or raves or the like. They’re fun. If you tie it up so that the string (rope, glow stick, whatever) can’t spread out into a simple O shape no matter how you shake it up (short of breaking the cable) then you have a knot. There are many of them. Trefoil knots are probably the easiest to get, but if you’re short on inspiration try looking at Celtic knot patterns.

    But if the string can be shaken out until it’s a simple O shape, the sort of thing you can place flat on a table, then you have an unknot. Just from this vocabulary you see why I like the subject so. Since this hasn’t quite got silly enough, let me assure you that an unknot is itself a kind of knot; we call it the trivial knot. It’s the knot that’s too simple to be a knot. I’m sure you were worried about that. I only hear people call it an unknot, but maybe there are heritages that prefer “trivial knot”.

    So that’s knots. What happens if you have more than one thing, though? What if you have a couple of string-loops? Several cables. We know these things can happen in the real world, since we’ve looked behind the TV set or the wireless router and we know there’s somehow more cables there than there are even things to connect.

    Even mathematicians wouldn’t want to ignore something so caught up with real-world implications. And we don’t. We get to them after we’re pretty comfortable working with knots. Describing them, working out the theoretical tools we’d use to un-knot a proper knot (spoiler: we cut things), coming up with polynomials that describe them, that sort of thing. When we’re ready for a new trick there we consider what happens if we have several knots. We call this bundle of knots a “link”. Well, what would you call it?

    A link is a collection of knots. By talking about a link we expect that at least some of the knots are going to loop around each other. This covers a lot of possibilities. We could picture one of those construction-paper chains, made of intertwined loops, that are good for elementary school craft projects to be a link. We can picture a keychain with a bunch of keys dangling from it to be a link. (Imagine each key is a knot, just made of a very fat, metal “string”. C’mon, you can give me that.) The mass of cables hiding behind the TV stand is not properly a link, since it’s not properly made out of knots. But if you can imagine taking the ends of each of those wires and looping them back to the origins, then the somehow vaster mess you get from that would be a link again.

    And then we come to an “unlink”. This has two pieces. The first is that it’s a collection of knots, yes, but knots that don’t interlink. We can pull them apart without any of them tugging the others along. The second piece is that each of the knots is itself an unknot. Trivial knots. Whichever you like to call them.

    The “unlink” also gets called the “trivial link”, since it’s as boring a link as you can imagine. Manifested in the real world, well, an unbroken rubber band is a pretty good unknot. And a pile of unbroken rubber bands will therefore be an unlink.

    If you get into knot theory you end up trying to prove stuff about complicated knots, and complicated links. Often these are easiest to prove by chopping up the knot or the link into something simpler. Maybe you chop those smaller pieces up again. And you can’t get simpler than an unlink. If you can prove whatever you want to show for that then you’ve got a step done toward proving your whole actually interesting thing. This is why we see unknots and unlinks enough to give them names and attention.

  • Joseph Nebus 6:00 pm on Sunday, 18 December, 2016 Permalink | Reply
    Tags: , dinosaurs,   

    Reading the Comics, December 17, 2016: Sleepy Week Edition 

    Comic Strip Master Command sent me a slow week in mathematical comics. I suppose they knew I was somehow on a busier schedule than usual and couldn’t spend all the time I wanted just writing. I appreciate that but don’t want to see another of those weeks when nothing qualifies. Just a warning there.

    'Dadburnit! I ain't never gonna git geometry!' 'Bah! Don't fret, Jughaid --- I never understood it neither! But I still manage to work all th' angles!'

    John Rose’s Barney Google and Snuffy Smith for the 12th of December, 2016. I appreciate the desire to pay attention to continuity that makes Rose draw the coffee cup in both panels, but Snuffy Smith has to swap it from one hand to the other to keep it in view there. Not implausible, just kind of busy. Also I can’t fault Jughaid for looking at two pages full of unillustrated text and feeling lost. That’s some Bourbaki-grade geometry going on there.

    John Rose’s Barney Google and Snuffy Smith for the 12th is a bit of mathematical wordplay. It does use geometry as the “hard mathematics we don’t know how to do”. That’s a change from the usual algebra. And that’s odd considering the joke depends on an idiom that is actually used by real people.

    Patrick Roberts’s Todd the Dinosaur for the 12th uses mathematics as the classic impossibly hard subject a seven-year-old can’t be expected to understand. The worry about fractions seems age-appropriate. I don’t know whether it’s fashionable to give elementary school students experience thinking of ‘x’ and ‘y’ as numbers. I remember that as a time when we’d get a square or circle and try to figure what number fits in the gap. It wasn’t a 0 or a square often enough.

    'Teacher! Todd just passed out! But he's waring one of those medic alert bracelets! ... Do not expose the wearer of this bracelet to anything mathematical, especially x's and y's, fractions, or anything that he should remember for a test!' 'Amazing how much writing they were able to fit on a little ol' T-Rex wrist!'

    Patrick Roberts’s Todd the Dinosaur for the 12th of December, 2016. Granting that Todd’s a kid dinosaur and that T-Rexes are not renowned for the hugeness of their arms, wouldn’t that still be enough space for a lot of text to fit around? I would have thought so anyway. I feel like I’m pluralizing ‘T-Rex’ wrong, but what would possibly be right? ‘Ts-rex’? Don’t make me try to spell tyrannosaurus.

    Jef Mallett’s Frazz for the 12th uses one of those great questions I think every child has. And it uses it to question how we can learn things from statistical study. This is circling around the “Bayesian” interpretation of probability, of what odds mean. It’s a big idea and I’m not sure I’m competent to explain it. It amounts to asking what explanations would be plausibly consistent with observations. As we get more data we may be able to rule some cases in or out. It can be unsettling. It demands we accept right up front that we may be wrong. But it lets us find reasonably clean conclusions out of the confusing and muddy world of actual data.

    Sam Hepburn’s Questionable Quotebook for the 14th illustrates an old observation about the hypnotic power of decimal points. I think Hepburn’s gone overboard in this, though: six digits past the decimal in this percentage is too many. It draws attention to the fakeness of the number. One, two, maybe three digits past the decimal would have a more authentic ring to them. I had thought the John Allen Paulos tweet above was about this comic, but it’s mere coincidence. Funny how that happens.

  • Joseph Nebus 6:00 pm on Friday, 16 December, 2016 Permalink | Reply
    Tags: , , , , , trees   

    The End 2016 Mathematics A To Z: Tree 

    Graph theory begins with a beautiful legend. I have no reason to suppose it’s false, except my natural suspicion of beautiful legends as origin stories. Its organization as a field is traced to 18th-century Königsberg, where seven bridges connected the banks of a river and a small island in the center. Whether it was possible to cross each bridge exactly once and get back where one started was, they say, a pleasant idle thought to ponder and path to try walking. Then Leonhard Euler solved the problem. It’s impossible.


    Graph theory arises whenever we have a bunch of things that can be connected. We call the things “vertices”, because that’s a good corner-type word. The connections we call “edges”, because that’s a good connection-type word. It’s easy to create graphs that look like the edges of a crystal, especially if you draw edges as straight as much as possible. You don’t have to. You can draw them curved. Then they look like the scary tangle of wires around your wireless router.

    Graph theory really got organized in the 19th century, and went crazy in the 20th. It turns out there’s lots of things that connect to other things. Networks, whether computers or social or thematically linked concepts. Anything that has to be delivered from one place to another. All the interesting chemicals. Anything that could be put in a pipe or taken on a road has some graph theory thing applicable to it.

    A lot of graph theory ponders loops. The original problem was about how to use every bridge, every edge, exactly one time. Look at a tangled mass of a graph and it’s hard not to start looking for loops. They’re often interesting. It’s not easy to tell if there’s a loop that lets you get to every vertex exactly once.

    What if there aren’t loops? What if there aren’t any vertices you can step away from and get back to by another route? Well, then you have a tree.

    A tree’s a graph where all the vertices are connected so that there aren’t any closed loops. We normally draw them with straight lines, the better to look like actual trees. We then stop trying to make them look like actual trees by doing stuff like drawing them as a long horizontal spine with a couple branches sticking off above and below, or as * type stars, or H shapes. They still correspond to real-world things. If you’re not sure how, consider the layout of one of those long, single-corridor hallways as in a hotel or dormitory. The rooms connect to one another as a tree, as long as no room opens to anything but its own closet or bathroom or the central hallway.

    We can talk about the radius of a graph. For a tree, that’s the greatest number of edges any vertex can be from the center of the tree. And every tree has a center. Or two centers. If it has two centers, the two share an edge. And that’s one of the quietly amazing things about trees to me. However complicated and messy the tree might be, we can find its center. How many things allow us that?

    A tree might have some special vertex. That’s called the ‘root’. It’s what the vertices and the connections represent that make a root; it’s not something inherent in the way trees look. We pick one for some special reason and then we highlight it. Maybe put it at the bottom of the drawing, making ‘root’ for once a sensible name for a mathematics thing. Often we put it at the top of the drawing, because I guess we’re just being difficult. Well, we do that because we were modelling stuff where a thing’s properties depend on what it comes from. And that puts us into thoughts of inheritance and of family trees. And weird as it is to put the root of a tree at the top, it’s also weird to put the eldest ancestors at the bottom of a family tree. People do it, but in those illuminated drawings that make a literal tree out of things. You don’t see it in family trees used for actual work, like filling up a couple pages at the start of a king or a queen’s biography.

    Trees give us neat new questions to ponder, like, how many are there? I mean, if you have a certain number of vertices then how many ways are there to arrange them? One or two or three vertices all have just the one way to arrange them. Four vertices can be hooked up a whole two ways. Five vertices offer a whole three different ways to connect them. Six vertices offer six ways to connect and now we’re finally getting something interesting. There’s eleven ways to connect seven vertices, and 23 ways to connect eight vertices. The number keeps on rising, but it doesn’t follow the obvious patterns for growth of this sort of thing.
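    Those counts are easy to verify by brute force for small numbers of vertices. A sketch in Python: run through every labeled tree by way of Prüfer sequences, and keep one canonical encoding (the AHU encoding, rooted at the tree's center) per distinct shape:

```python
from bisect import insort
from itertools import product

def prufer_to_adj(seq, n):
    """Decode a Prufer sequence into adjacency lists of a labeled tree."""
    degree = [1] * n
    for x in seq:
        degree[x] += 1
    adj = [[] for _ in range(n)]
    leaves = sorted(i for i in range(n) if degree[i] == 1)
    for x in seq:
        leaf = leaves.pop(0)          # smallest current leaf
        adj[leaf].append(x)
        adj[x].append(leaf)
        degree[x] -= 1
        if degree[x] == 1:
            insort(leaves, x)
    u, v = leaves                     # two vertices remain; join them
    adj[u].append(v)
    adj[v].append(u)
    return adj

def centers(adj):
    """The one or two central vertices, found by stripping leaves."""
    n = len(adj)
    if n <= 2:
        return list(range(n))
    deg = [len(a) for a in adj]
    layer = [i for i in range(n) if deg[i] == 1]
    remaining = n
    while remaining > 2:
        remaining -= len(layer)
        nxt = []
        for u in layer:
            for v in adj[u]:
                deg[v] -= 1
                if deg[v] == 1:
                    nxt.append(v)
        layer = nxt
    return layer

def canon(adj):
    """Canonical string for the tree's shape (AHU encoding from center)."""
    def encode(u, parent):
        return "(" + "".join(sorted(encode(v, u)
                                    for v in adj[u] if v != parent)) + ")"
    return min(encode(c, -1) for c in centers(adj))

def count_trees(n):
    """Number of trees on n vertices, up to isomorphism."""
    if n == 1:
        return 1
    shapes = {canon(prufer_to_adj(seq, n))
              for seq in product(range(n), repeat=n - 2)}
    return len(shapes)

print([count_trees(n) for n in range(1, 8)])  # [1, 1, 1, 2, 3, 6, 11]
```

Running it one step further gives 23 for eight vertices, matching the sequence above; past that the brute force gets slow and you'd want cleverer counting (Otter worked out how the sequence grows).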

    And if that’s not enough to idly ponder then think of destroying trees. Draw a tree, any shape you like. Pick one of the vertices. Imagine you obliterate that. How many separate pieces has the tree been broken into? (The count is just how many edges met the vertex you destroyed.) It might be as few as one, if you obliterated a vertex at the end of a branch. It might be as many as the number of remaining vertices. If graph theory took away the pastime of wandering around Königsberg’s bridges, it has given us this pastime we can create anytime we have pen, paper, and a long meeting.

  • Joseph Nebus 6:00 pm on Wednesday, 14 December, 2016 Permalink | Reply
    Tags: , , , ,   

    The End 2016 Mathematics A To Z: Smooth 

    Mathematicians affect a pose of objectivity. We justify this by working on things whose truth we can know, and which must be true whenever we accept certain rules of deduction and certain definitions and axioms. This seems fair. But we choose to pay attention to things that interest us for particular reasons. We study things we like. My A To Z glossary term for today is about one of those things we like.


    Functions. Not everything mathematicians do is functions. But functions turn up a lot. We need to set some rules. “A function” is so generic a thing we can’t handle it much. Narrow it down. Pick functions with domains that are numbers. Range too. By numbers I mean real numbers, maybe complex numbers. That gives us something.

    There’s functions that are hard to work with. This is almost all of them, so we don’t touch them unless we absolutely must. These are the functions that aren’t continuous. That means what you imagine. The value of the function at some point is wholly unrelated to its value at some nearby point. It’s hard to work with anything that’s unpredictable like that. Functions as well as people.

    We like functions that are continuous. They’re predictable. We can make approximations. We can estimate the function’s value at some point using its value at some more convenient point. It’s easy to see why that’s useful for numerical mathematics, for calculations to approximate stuff. The dazzling thing is it’s useful analytically. We step into the Platonic-ideal world of pure mathematics. We have tools that let us work as if we had infinitely many digits of precision, for infinitely many numbers at once. And yet we use estimates and approximations and errors. We use them in ways to give us perfect knowledge; we get there by estimates.

    Continuous functions are nice. Well, they’re nicer to us than functions that aren’t continuous. But there are even nicer functions. Functions nicer to us. A continuous function, for example, can have corners; it can change direction suddenly and without warning. A differentiable function is more predictable. It can’t have corners like that. Knowing the function well at one point gives us more information about what it’s like nearby.
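    The standard example of a corner is the absolute value function at zero. It’s continuous, but the slope measured approaching from the right never agrees with the slope measured approaching from the left, so there’s no derivative there. A quick numeric check in Python:

```python
# The absolute-value function is continuous everywhere but has a
# corner at zero: the slopes seen from the right and from the left
# refuse to agree, no matter how small the step.
def slope(f, a, h):
    """One-sided difference quotient of f at a with step h."""
    return (f(a + h) - f(a)) / h

for h in (0.1, 0.001, 0.00001):
    print(slope(abs, 0.0, h), slope(abs, 0.0, -h))   # 1.0 and -1.0 every time
```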

    The derivative of a function doesn’t have to be continuous. Grumble. It’s nice when it is, though. It makes the function easier to work with. It’s really nice for us when the derivative itself has a derivative. Nothing guarantees that the derivative of a derivative is continuous. But maybe it is. Maybe the derivative of the derivative has a derivative. That’s a function we can do a lot with.

    A function is “smooth” if it has as many derivatives as we need for whatever it is we’re doing. And if those derivatives are continuous. If this seems loose that’s because it is. A proof for whatever we’re interested in might need only the original function and its first derivative. It might need the original function and its first, second, third, and fourth derivatives. It might need hundreds of derivatives. If we look through the details of the proof we might find exactly how many derivatives we need and how many of them need to be continuous. But that’s tedious. We save ourselves considerable time by saying the function is “smooth”, as in, “smooth enough for what we need”.

    If we do want to specify how many continuous derivatives a function has we call it a “C^k function”. The C here means continuous. The ‘k’ means there are ‘k’ continuous derivatives of it. This is completely different from a vector in “C^k”, which would be a k-dimensional vector of complex numbers. Whether the “C” is boldface or not is important. A function might have infinitely many continuous derivatives. That we call a “C^∞ function”. That’s got wonderful properties, especially if the domain and range are complex-valued numbers. We couldn’t do Complex Analysis without it. Complex Analysis is the course students take after wondering how they’ll ever survive Real Analysis. It’s much easier than Real Analysis. Mathematics can be strange.

  • Joseph Nebus 6:00 pm on Tuesday, 13 December, 2016 Permalink | Reply
    Tags: , , , MacArthur Genius Grants, , ,   

    Reading the Comics, December 10, 2016: E = mc^2 Edition 

    And now I can finish off last week’s mathematically-themed comic strips. There’s a strong theme to them, for a refreshing change. It would almost be what we’d call a Comics Synchronicity, on Usenet group rec.arts.comics.strips, had they all appeared the same day. Some folks claiming to be open-minded would allow a Synchronicity for strips appearing on subsequent days or close enough in publication, but I won’t have any of that unless it suits my needs at the time.

    Ernie Bushmiller’s Nancy for the 6th would fit thematically better as a Cameo Edition comic. It mentions arithmetic but only because it’s the sort of thing a student might need a cheat sheet on. I can’t fault Sluggo needing help on adding eight or multiplying by six; they’re hard. Not remembering 4 x 2 is unusual. But everybody has their own hangups. The strip originally ran the 6th of December, 1949.

    People contorted to look like a 4, a 2, and a 7 bounce past Dethany's desk. She ponders: 'Performance review time ... when the company reduces people to numbers.' Wendy, previous star of the strip, tells Dethany 'You're next.' Wendy's hair is curled into an 8.

    Bill Holbrook’s On The Fastrack for the 7th of December, 2016. Don’t worry about the people in the first three panels; they’re just temps, and weren’t going to appear in the comic again.

    Bill Holbrook’s On The Fastrack for the 7th seems like it should be the anthropomorphic numerals joke for this essay. It doesn’t seem to quite fit the definition, but, what the heck.

    Brian Boychuk and Ron Boychuk’s The Chuckle Brothers on the 7th starts off the run of E = mc^2 jokes for this essay. This one reminds me of Gary Larson’s Far Side classic with the cleaning woman giving Einstein just that little last bit of inspiration about squaring things away. It shouldn’t surprise anyone that E equalling m times c squared isn’t a matter of what makes an attractive-looking formula. There’s good reasons when one thinks what energy and mass are to realize they’re connected like that. Einstein’s famous, deservedly, for recognizing that link and making it clear.

    Mark Pett’s Lucky Cow rerun for the 7th has Claire try to use Einstein’s famous quote to look like a genius. The mathematical content is accidental. It could be anything profound yet easy to express, and it’s hard to beat the economy of “E = mc^2” for both. I’d agree that it suggests Claire doesn’t know statistics well to suppose she could get a MacArthur “Genius” Grant by being overheard by a grant nominator. On the other hand, does anybody have a better idea how to get their attention?

    Harley Schwadron’s 9 to 5 for the 8th completes the “E = mc^2” triptych. Calling a tie with the equation on it a power tie elevates the gag for me. I don’t think of “E = mc^2” as something that uses powers, even though it literally does. I suppose what gets me is that “c” is a constant number. It’s the speed of light in a vacuum. So “c^2” is also a constant number. In form the equation isn’t different from “E = m times seven”, and nobody thinks of seven as a power.

    Morrie Turner’s Wee Pals rerun for the 8th is a bit of mathematics wordplay. It’s also got that weird Morrie Turner thing going on where it feels unquestionably earnest and well-intentioned but prejudiced in that way smart 60s comedies would be.

    Sarge demands to know who left this algebra book on his desk; Zero says not him. Sarge ignores him and asks 'Who's been figuring all over my desk pad?' Zero also unnecessarily denies it. 'Come on, whose is it?!' Zero reflects, 'Gee, he *never* picks on *me*!'

    Mort Walker’s vintage Beetle Bailey for the 18th of May, 1960. Rerun the 9th of December, 2016. For me the really fascinating thing about ancient Beetle Bailey strips is that they could run today with almost no changes and yet they feel like they’re from almost a different cartoon universe from the contemporary comic. I don’t know how that is, or why it is.

    Mort Walker’s Beetle Bailey for the 18th of May, 1960 was reprinted on the 9th. It mentions mathematics — algebra specifically — as the sort of thing intelligent people do. I’m going to take a leap and suppose it’s the sort of algebra done in high school about finding values of ‘x’ rather than the mathematics-major sort of algebra, done with groups and rings and fields. I wonder when holding a mop became the signifier of not just low intelligence but low ambition. It’s subverted in Jef Mallett’s Frazz, the title character of which works as a janitor to support his exercise and music habits. But it is a standard prop to signal something.

  • Joseph Nebus 6:00 pm on Monday, 12 December, 2016 Permalink | Reply
    Tags: , , , definite integrals,   

    The End 2016 Mathematics A To Z: Riemann Sum 

    I see for the other A To Z I did this year I did something else named for Riemann. So I did. Bernhard Riemann did a lot of work that’s essential to how we see mathematics today. We name all kinds of things for him, and correctly so. Here’s one of his many essential bits of work.

    Riemann Sum.

    The Riemann Sum is a thing we learn in Intro to Calculus. It’s essential in getting us to definite integrals. We’re introduced to it in functions of a single variable. The functions have a domain that’s an interval of real numbers and a range that’s somewhere in the real numbers. The Riemann Sum — and from it, the integral — is a real number.

    We get this number by following a couple steps. The first is we chop the interval up into a bunch of smaller intervals. That chopping-up we call a “partition” because it’s another of those times mathematicians use a word the way people might use the same word. From each one of those chopped-up pieces we pick a representative point. Now with each piece evaluate what the function is for that representative point. Multiply that by the width of the partition it was in. Then take those products for each of those pieces and add them all together. If you’ve done it right you’ve got a number.
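    Those steps translate directly into a few lines of code. A sketch in Python, where the function, the partition, and the representative points are all example choices of mine:

```python
# A Riemann sum, following the recipe above: chop the interval into
# pieces, pick a representative point in each piece, multiply each
# function value by its piece's width, and add everything up.
def riemann_sum(f, partition, points):
    """partition: endpoints a = x0 < x1 < ... < xn = b.
    points: one representative point for each piece."""
    return sum(f(p) * (right - left)
               for left, right, p in zip(partition, partition[1:], points))

# Example: f(x) = x^2 on [0, 2], with an uneven partition and
# arbitrary representative points (both chosen just for illustration).
f = lambda x: x * x
print(riemann_sum(f, [0.0, 0.5, 1.2, 2.0], [0.3, 1.0, 1.5]))
```

This particular sum comes out around 2.55; the true integral of x^2 over [0, 2] is 8/3, and finer partitions push the sum toward it.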

    You need a couple pieces in place to have “the” Riemann Sum for something. You need a function, which is fair enough. And you need a partitioning of the interval. And you need some representative point for each of the partitions. Change any of them — function, partition, or point — and you may change the sum you get. You expect that for changing the function. Changing the partition? That’s less obvious. But draw some wiggly curvy function on a sheet of paper. Draw a couple of partitions of the horizontal axis. (You’ll probably want to use different colors for different partitions.) That should coax you into it. And you’d probably take it on my word that different representative points give you different sums.

    Very different? It’s possible. There’s nothing stopping it from happening. But if the results aren’t very different then we might just have an integrable function. That’s a function that gives us the same Riemann Sum no matter how we pick representative points, as long as we pick partitions that get finer and finer enough. We measure how fine a partition is by how big the widest chopped-up piece is. To be integrable the Riemann Sum for a function has to get to the same number whenever the partition’s size gets small enough and however we pick points inside. We get the lovely quiet paradox in which we add together infinitely many things, each of them infinitesimally tiny, and get a regular old number out of all that work.

    We use the Riemann Sum for what we call numerical quadrature. That’s working out integrals on the computer. Or calculator. Or by hand. When we do it by evaluating numbers instead of using analysis. It’s very easy to program. And we can do some tricks based on the Riemann Sum to make the numerical estimate a closer match to the actual integral.

    And we use the Riemann Sum to learn how the Riemann Integral works. It’s a blessedly straightforward thing. It appeals to intuition well. It lets us draw all sorts of curves with rectangular boxes overlaying them. It’s so easy to work out the area of a rectangular box. We can imagine adding up these areas without being confused.

    We don’t use the Riemann Sum to actually do integrals, though. Numerical approximations to an integral, yes. For the actual integral it’s too hard to use. What makes it hard is you need to evaluate this for every possible partition and every possible pick of representative points. In grad school my analysis professor worked through — once — using this to integrate the number 1. This is the easiest possible thing to integrate and it was barely manageable. He gave a good try at integrating the function ‘f(x) = x’ but admitted he couldn’t do it. None of us could.

    When you see the Riemann Sum in an Introduction to Calculus course you see it in simplified form. You get partitions that are very easy to work with. Like, you break the interval up into some number of equally-sized chunks. You get representative points that follow one of a couple good choices. The left end of the partition. The right end of the partition. The middle of the partition.
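    For an integrable function those standard choices all close in on the same number as the pieces shrink. Here’s that convergence in Python for f(x) = x^2 on [0, 1], whose integral is 1/3 (the example is mine):

```python
# Left-endpoint, right-endpoint, and midpoint Riemann sums, using
# n equal-width pieces. For an integrable function all three head
# toward the same value as n grows.
def uniform_sums(f, a, b, n):
    w = (b - a) / n
    left = sum(f(a + i * w) * w for i in range(n))
    right = sum(f(a + (i + 1) * w) * w for i in range(n))
    mid = sum(f(a + (i + 0.5) * w) * w for i in range(n))
    return left, right, mid

f = lambda x: x * x   # integral over [0, 1] is exactly 1/3
for n in (10, 100, 1000):
    print(n, uniform_sums(f, 0.0, 1.0, n))
```

The midpoint choice closes in noticeably faster here, which is the seed of the tricks mentioned earlier for sharpening numerical estimates.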

    That’s fine, numerically. If the function is integrable it doesn’t matter what partition or representative points we pick. And it’s fine for learning about whether functions are integrable. If it matters whether you pick left or middle or right ends of the partition then the function isn’t integrable. The instructor can give functions that break integrability based on a given partition or endpoint choice or whatever.

    But that isn’t every possible partition and every possible pick of representative points. I suppose it’s possible to work all that out for a couple of really, really simple functions. But it’s so much work. We’re better off using the Riemann Sum to get to formulas about integrals that don’t depend on actually using the Riemann Sum.

    So that is the curious position the Riemann Sum has. It is a fundament of integral calculus. It is the way we first define the definite integral. We rely on it to learn what definite integrals are like. We use it all the time numerically. We never use it analytically. It’s too hard. I hope you appreciate the strange beauty of that.

  • Joseph Nebus 6:00 pm on Sunday, 11 December, 2016 Permalink | Reply
    Tags: , , , pranks, , titles   

    Reading the Comics, December 5, 2016: Cameo Appearances Edition 

    Comic Strip Master Command sent a bunch of strips my way this past week. They’ll get out to your way over this week. The first bunch are all on Gocomics.com, so I don’t feel quite fair including the strips themselves. This set also happens to be a bunch in which mathematics gets a passing mention, or is just used because they need some subject and mathematics is easy to draw into a joke. That’s all right.

Jef Mallet’s Frazz for the 4th uses blackboard arithmetic and the iconic minor error of arithmetic. It’s also strikingly well-composed; look at the art from a little farther away. Forgetting to carry the one is maybe a perfect minor error for this sort of thing. Everyone does it, experienced mathematicians included. It’s very gradable. When someone’s learning arithmetic, making this mistake is considered evidence that the learner doesn’t know how to add. When someone’s learned it, making the mistake isn’t considered evidence the person doesn’t know how to add. A lot of mistakes work that way, somehow.

Rick Stromoski’s Soup to Nutz for the 4th name-drops Fundamentals of Algebra as a devilish, ban-worthy book. Everyone feels that way. Mathematics majors get that way around two months into their Introduction To Not That Kind Of Algebra course too. I doubt Stromoski has any particular algebra book in mind, but it doesn’t matter. The convention in mathematics books is to make titles that are ruthlessly descriptive, with not a touch of poetry to them. Among the mathematics books I have on my nearest shelf are Resnikoff and Wells’s Mathematics in Civilization; Koks’ Explorations in Mathematical Physics: The Concepts Behind An Elegant Language; Enderton’s A Mathematical Introduction To Logic; Courant, Robbins, and Stewart’s What Is Mathematics?; Murasugi’s Knot Theory And Its Applications; Nishimori’s Statistical Physics of Spin Glasses and Information Processing; Brush’s The Kind Of Motion We Call Heat; and so on. Only the Brush title has the slightest poetry to it, and it’s a history (of thermodynamics and statistical mechanics). The Courant/Robbins/Stewart has a title you could imagine on a bookstore shelf, but it’s also in part a popularization.

It’s the convention, and it’s all right in its domain. If you are deep in the library stacks and don’t know what a book is about, the spine will tell you what the subject is. You might not know what level or depth the book is in, but you’ll know what the book is. The downside is if you remember having liked a book but not who wrote it you’re lost. Methods of Functional Analysis? Techniques in Modern Functional Analysis? … You could probably make a bingo game out of mathematics titles.

    Johnny Hart’s Back to B.C. for the 5th, a rerun from 1959, plays on the dawn of mathematics and the first thoughts of parallel lines. If parallel lines stir feelings in people they’re complicated feelings. One’s either awed at the resolute and reliable nature of the lines’ interaction, or is heartbroken that the things will never come together (or, I suppose, break apart). I can feel both sides of it.

Dave Blazek’s Loose Parts for the 5th features the arithmetic blackboard as inspiration for a prank. It’s the sort of thing harder to do with someone’s notes for an English essay. But, to spoil the fun, I have to say in my experience something fiddled with in the middle of a board wouldn’t even register. In much the way people will read over typos, their minds seeing what should be there instead of what is, a minor mathematical error will often not be seen. The mathematician will carry on with what she thought should be there. Especially if the error is a few lines back of the latest work. Not always, though, and when the mind doesn’t make that correction it’s a heck of a problem. (And here I am thinking of the week, the week, I once spent stymied by a problem because I was differentiating the function e^x wrong. The hilarious thing here is it is impossible to find something easier to differentiate than e^x. After you differentiate it correctly you get e^x. An advanced squirrel could do it right, and here I was in grad school doing it wrong.)

    Nate Creekmore’s Maintaining for the 5th has mathematics appear as the sort of homework one does. And a word problem that uses coins for whatever work it does. Coins should be good bases for word problems. They’re familiar enough and people do think about them, and if all else fails someone could in principle get enough dimes and quarters and just work it out by hand.

    Sam Hepburn’s Questionable Quotebook for the 5th uses a blackboard full of mathematics to signify a monkey’s extreme intelligence. There’s a little bit of calculus in there, an appearance of “\frac{df}{dx} ” and a mention of the limit. These are things you get right up front of a calculus course. They’ll turn up in all sorts of problems you try to do.

    Charles Schulz’s Peanuts for the 5th is not really about mathematics. Peppermint Patty just mentions it on the way to explaining the depths of her not-understanding stuff. But it’s always been one of my favorite declarations of not knowing what’s going on so I do want to share it. The strip originally ran the 8th of December, 1969.

  • Joseph Nebus 6:00 pm on Saturday, 10 December, 2016 Permalink | Reply
    Tags: , , ,   

    What Do I Need To Pass This Class? (December 2016 Edition) 

    Chatting with friends made me aware some schools have already started finals. So I’m sorry to be late with this. But for those who need it here’s my ancient post on how to calculate the minimum score you need on the final to get the grade you want in the class. And for those who see my old prose style and recoil in horror I’m sorry. I was less experienced back then. Don’t look smug; you were too. But here’s a set of tables for common grade distributions, so you don’t have to do any calculations yourself. Just look up numbers instead.
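If you’d rather compute than look things up, the arithmetic behind those tables is simple enough to sketch in a few lines of Python. The weights and grades here are hypothetical examples; your own numbers come from your syllabus.

```python
def needed_on_final(target_grade, current_avg, final_weight):
    """Minimum final-exam score needed to finish with target_grade,
    when the final counts for final_weight (a fraction) of the grade
    and everything else averages to current_avg."""
    return (target_grade - (1 - final_weight) * current_avg) / final_weight

# With an 85 average and a final worth 25 percent of the grade,
# ending at 90 overall would take a 105 on the final -- in other
# words, it's out of reach without extra credit.
score = needed_on_final(90, 85, 0.25)
```

An answer over 100 is the polite way the arithmetic tells you the grade you want is no longer attainable, which is part of why starting preparation early matters more than this formula does.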

    With that information delivered, let me say once more: what you really need is to start preparing early, and consistently. Talk with your instructor about stuff you don’t understand, and stuff you think you understand, early on. Don’t give a line about the grade you need; that puts an inappropriate pressure on the instructor to grade you incorrectly. Study because it’s worth studying. Even if you don’t see why the subject is interesting, it is something that people smarter than you have spent a lot of time thinking about. It’s worth figuring out something of what they know that you don’t yet.

    • davekingsbury 11:13 pm on Sunday, 11 December, 2016 Permalink | Reply

      Have you got any tables with the answers? ;)


      • Joseph Nebus 6:06 am on Saturday, 17 December, 2016 Permalink | Reply

        So there’s this old joke about the professor hoping to draw students to the review session ahead of the exam, which is to be the classic blend of true-or-false questions, multiple-choice questions, short-answer questions, major problems. To get students in she promises that she’ll give the answer to one of the questions during the review. The review session comes and gets pretty good attendance. As she’s dismissing the class one of the students reminds her of the promise for one of the answers. And she says, ‘Very well. One of the answers is true.’

        There is sometimes a temptation to do something playful or weird with true-false questions, particularly. I remember once giving in to the temptation to make all the questions in the true-or-false section ‘true’, partly to see if students would be unnerved by too long a series of identical answers. It was a dumb idea. I don’t think most students even noticed. And if they were unnerved by too many identical answers in a row, then, I would now say, that was me screwing up. If there is a point to tests it is whether students can demonstrate mastery of a concept. It’s fair to test someone on how well they’ve understood the subtleties of the concept. Head games the teacher might be playing have absolutely nothing to do with the concept, though, so it’s poor form to mark someone down — or up! — for mastering me instead.

        Liked by 1 person

    • davekingsbury 11:32 am on Saturday, 17 December, 2016 Permalink | Reply

      Nice answer to my question! I agree that tests should give students a chance to use what they’ve learned – a much higher-order skill than box-ticking.


      • Joseph Nebus 6:44 am on Wednesday, 21 December, 2016 Permalink | Reply

Well, I’ve come to see tests as having a couple purposes. And there is some value in box-ticking. We need to be able to think deeply about stuff, but we also need to have mastery of boring little facts so that we’re thinking about the right stuff. Multiple-choice or true/false questions are pretty good about straightening out whether someone has got definitions and basic concepts and all that. In (say) an essay it can be hard to tell whether the thing’s gone wrong because a good argument was built on bad understandings, or because the argument was lousy yet the basic concepts understood perfectly.

        Liked by 1 person

        • davekingsbury 4:00 pm on Wednesday, 21 December, 2016 Permalink | Reply

          Yes, horses for courses, as they say … I preferred doing essays probably because I could waffle for England!


          • Joseph Nebus 5:09 am on Thursday, 5 January, 2017 Permalink | Reply

            There is that. A competent essay is such a blessing in the midst of a pile of exams. They stand out from the incompetent or the incoherent pieces. Once they pass the plagiarism check.


            • davekingsbury 9:54 am on Thursday, 5 January, 2017 Permalink | Reply

              Ah plagiarism – killed coursework, unfortunately – though I think people should be allowed to quote as long as they refute or develop the ideas themselves.


  • Joseph Nebus 6:00 pm on Friday, 9 December, 2016 Permalink | Reply
    Tags: , , , evens, , , normal subgroups, odds,   

    The End 2016 Mathematics A To Z: Quotient Groups 

    I’ve got another request today, from the ever-interested and group-theory-minded gaurish. It’s another inspirational one.

    Quotient Groups.

    We all know about even and odd numbers. We don’t have to think about them. That’s why it’s worth discussing them some.

    We do know what they are, though. The integers — whole numbers, positive and negative — we can split into two sets. One of them is the even numbers, two and four and eight and twelve. Zero, negative two, negative six, negative 2,038. The other is the odd numbers, one and three and nine. Negative five, negative nine, negative one.

    What do we know about numbers, if all we look at is whether numbers are even or odd? Well, we know every integer is either an odd or an even number. It’s not both; it’s not neither.

    We know that if we start with an even number, its negative is also an even number. If we start with an odd number, its negative is also an odd number.

    We know that if we start with a number, even or odd, and add to it its negative then we get an even number. A specific number, too: zero. And that zero is interesting because any number plus zero is that same original number.

We know we can add odds or evens together. An even number plus an even number will be an even number. An odd number plus an odd number is an even number. An odd number plus an even number is an odd number. And subtraction is the same as addition, by these lights. One number minus another number is just one number plus the negative of the other number. So even minus even is even. Odd minus odd is even. Odd minus even is odd.

    We can pluck out some of the even and odd numbers as representative of these sets. We don’t want to deal with big numbers, nor do we want to deal with negative numbers if we don’t have to. So take ‘0’ as representative of the even numbers. ‘1’ as representative of the odd numbers. 0 + 0 is 0. 0 + 1 is 1. 1 + 0 is 1. The addition is the same thing we would do with the original set of integers. 1 + 1 would be 2, which is one of the even numbers, which we represent with 0. So 1 + 1 is 0. If we’ve picked out just these two numbers each is the minus of itself: 0 – 0 is 0 + 0. 1 – 1 is 1 + 1. All that gives us 0, like we should expect.
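If you like, you can check this representative arithmetic mechanically. Here is a sketch in Python; the modulo operator does the work of replacing a number with its representative, 0 for even and 1 for odd.

```python
def rep(n):
    """Representative of the integer n: 0 if even, 1 if odd."""
    return n % 2

# 1 + 1 is 2, an even number, which we represent with 0.
assert rep(1 + 1) == 0

# Adding representatives and reducing agrees with adding the original
# integers and then reducing -- the odd/even rules in miniature.
for a in range(-10, 10):
    for b in range(-10, 10):
        assert rep(rep(a) + rep(b)) == rep(a + b)
```

That the answer never depends on which even or odd number you happened to start with is the whole trick that makes the representatives work.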

    Two paragraphs back I said something that’s obvious, but deserves attention anyway. An even plus an even is an even number. You can’t get an odd number out of it. An odd plus an odd is an even number. You can’t get an odd number out of it. There’s something fundamentally different between the even and the odd numbers.

    And now, kindly reader, you’ve learned quotient groups.

OK, I’ll do some backfilling. It starts with groups. A group is the most skeletal cartoon of arithmetic. It’s a set of things and some operation that works like addition. The thing-like-addition has to work on pairs of things in your set, and it has to give something in the set. There has to be a zero, something you can add to anything without changing it. We call that the identity, or the additive identity, because it doesn’t change something else’s identity. It makes sense if you don’t stare at it too hard. Everything has an additive inverse. That is, everything has a “minus”, that you can add to it to get zero.

    With odd and even numbers the set of things is the integers. The thing-like-addition is, well, addition. I said groups were based on how normal arithmetic works, right?

And then you need a subgroup. A subgroup is … well, it’s a subset of the original group that’s itself a group. It has to use the same addition the original group does. The even numbers are such a subgroup of the integers. Formally they make something called a “normal subgroup”, which is a little too much for me to explain right now. If your addition works like it does for normal numbers, that is, if “a + b” is the same thing as “b + a”, then all your subgroups are normal subgroups. Yes, it can happen that they’re not. If the addition is something like rotations in three-dimensional space, or swapping the order of things, then the order you “add” things in matters.

    We make a quotient group by … OK, this isn’t going to sound like anything. It’s a group, though, like the name says. It uses the same addition that the original group does. Its set, though, that’s itself made up of sets. One of the sets is the normal subgroup. That’s the easy part.

    Then there’s something called cosets. You make a coset by picking something from the original group and adding it to everything in the subgroup. If the thing you pick was from the original subgroup that’s just going to be the subgroup again. If you pick something outside the original subgroup then you’ll get some other set.

Starting from the subgroup of even numbers there’s not a lot to do. You can get the even numbers and you get the odd numbers. Doesn’t seem like much. We can do otherwise though. Suppose we start from the subgroup of numbers divisible by 4. That’s 0, 4, 8, 12, -4, -8, -12, and so on. Now there are four cosets we can make from that. One of them is the original subgroup itself. Or we have 1 plus that set: 1, 5, 9, 13, -3, -7, -11, and so on. Or we have 2 plus that set: 2, 6, 10, 14, -2, -6, -10, and so on. Or we have 3 plus that set: 3, 7, 11, 15, -1, -5, -9, and so on. None of these others are subgroups, which is why we don’t call them subgroups. We call them cosets.

    These collections of cosets, though, they’re the pieces of a new group. The quotient group. One of them, the normal subgroup you started with, is the identity, the thing that’s as good as zero. And you can “add” the cosets together, in just the same way you can add “odd plus odd” or “odd plus even” or “even plus even”.

    For example. Let me start with the numbers divisible by 4. I will have so much a better time if I give this a name. I’ll pick ‘Q’. This is because, you know, quarters, quartet, quadrilateral, this all sounds like four-y stuff. The integers — the integers have a couple of names. ‘I’, ‘J’, and ‘Z’ are the most common ones. We get ‘Z’ from German; a lot of important group theory was done by German-speaking mathematicians. I’m used to it so I’ll stick with that. The quotient group ‘Z / Q’, read “Z modulo Q”, has (it happens) four cosets. One of them is Q. One of them is “1 + Q”, that set 1, 5, 9, and so on. Another of them is “2 + Q”, that set 2, 6, 10, and so on. And the last is “3 + Q”, that set 3, 7, 11, and so on.

    And you can add them together. 1 + Q plus 1 + Q turns out to be 2 + Q. Try it out, you’ll see. 1 + Q plus 2 + Q turns out to be 3 + Q. 2 + Q plus 2 + Q is Q again.
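You can try those sums out in a few lines of Python, if you like. ‘Q’ here is the essay’s name for the multiples of 4, and the labels 0 through 3 are the coset representatives from above.

```python
def coset(n):
    """Label (0 through 3) of the coset n + Q, where Q is the
    set of multiples of 4."""
    return n % 4

def add_cosets(a, b):
    """Add the cosets a + Q and b + Q; return the label of the sum."""
    return (a + b) % 4

# (1 + Q) plus (1 + Q) is 2 + Q; (1 + Q) plus (2 + Q) is 3 + Q;
# (2 + Q) plus (2 + Q) is Q itself, label 0.
assert add_cosets(1, 1) == 2
assert add_cosets(1, 2) == 3
assert add_cosets(2, 2) == 0

# The answer doesn't depend on which elements represent the cosets:
# 5 is in 1 + Q and 10 is in 2 + Q, and 5 + 10 lands in 3 + Q.
assert coset(5 + 10) == add_cosets(coset(5), coset(10))
```

That last check is the important one: the sum of two cosets comes out the same no matter which members you pick to stand in for them, which is what lets the cosets form a group at all.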

    The quotient group uses the same addition as the original group. But it doesn’t add together elements of the original group, or even of the normal subgroup. It adds together sets made from the normal subgroup. We’ll denote them using some form that looks like “a + N”, or maybe “a N”, if ‘N’ was the normal subgroup and ‘a’ something that wasn’t in it. (Sometimes it’s more convenient writing the group operation like it was multiplication, because we do that by not writing anything at all, which saves us from writing stuff.)

    If we’re comfortable with the idea that “odd plus odd is even” and “even plus odd is odd” then we should be comfortable with adding together quotient groups. We’re not, not without practice, but that’s all right. In the Introduction To Not That Kind Of Algebra course mathematics majors take they get a lot of practice, just in time to be thrown into rings.

    Quotient groups land on the mathematics major as a baffling thing. They don’t actually turn up things from the original group. And they lead into important theorems. But to an undergraduate they all look like text huddling up to ladders of quotient groups. We’re told these are important theorems and they are. They also go along with beautiful diagrams of how these quotient groups relate to each other. But they’re hard going. It’s tough finding good examples and almost impossible to explain what a question is. It comes as a relief to be thrown into rings. By the time we come back around to quotient groups we’ve usually had enough time to get used to the idea that they don’t seem so hard.

    Really, looking at odds and evens, they shouldn’t be so hard.

    • gaurish 9:10 am on Saturday, 10 December, 2016 Permalink | Reply

      Thanks! When I first learnt about quotient groups (two years ago) I visualized them as the equivalence classes we create so as to have a better understanding of a bigger group (since my study of algebra has been motivated by its need in Number theory as a generalization of modulo arithmetic). Then the isomorphism theorems just changed the way I look at quotient of an algebraic structure. See: http://math.stackexchange.com/q/1816921/214604


      • Joseph Nebus 5:47 am on Saturday, 17 December, 2016 Permalink | Reply

I’m glad that you liked it. I do think equivalence classes are the easiest way into quotient groups — it’s essentially what I did here — but that’s because people get introduced to equivalence classes without knowing what they are. Odd and even numbers, for example, or checking arithmetic by casting out nines are making use of these classes. Isomorphism theorems are great and substantial but they do take so much preparation to get used to. Probably shifting from the first to the second is the sign of really mastering the idea of a quotient group.

        Liked by 1 person

  • Joseph Nebus 6:00 pm on Wednesday, 7 December, 2016 Permalink | Reply
    Tags: , , , inverse functions, principal values,   

    The End 2016 Mathematics A To Z: Principal 

    Functions. They’re at the center of so much mathematics. They have three pieces: a domain, a range, and a rule. The one thing functions absolutely must do is match stuff in the domain to one and only one thing in the range. So this is where it gets tricky.


Thing with this one-and-only-one thing in the range is it’s not always practical. Sometimes it only makes sense to allow for something in the domain to match several things in the range. For example, suppose we have the domain of positive numbers. And we want a function that gives us the numbers which, squared, are whatever the original number was. For any positive real number there are two numbers that do that. 4 should match to both +2 and -2.

    You might ask why I want a function that tells me the numbers which, squared, equal something. I ask back, what business is that of yours? I want a function that does this and shouldn’t that be enough? We’re getting off to a bad start here. I’m sorry; I’ve been running ragged the last few days. I blame the flat tire on my car.

    Anyway. I’d want something like that function because I’m looking for what state of things makes some other thing true. This turns up often in “inverse problems”, problems in which we know what some measurement is and want to know what caused the measurement. We do that sort of problem all the time.

    We can handle these multi-valued functions. Of course we can. Mathematicians are as good at loopholes as anyone else is. Formally we declare that the range isn’t the real numbers but rather sets of real numbers. My what-number-squared function then matches ‘4’ in the domain to the set of numbers ‘+2 and -2’. The set has several things in it, but there’s just the one set. Clever, huh?

This sort of thing turns up a lot. There’s two numbers that, squared, give us any real number (except zero). There’s three numbers that, cubed, give us any real number (again except zero). Polynomials might have a whole bunch of numbers that make some equation true. Trig functions are worse. The tangent of 45 degrees equals 1. So is the tangent of 225 degrees. Also 405 degrees. Also -45 degrees. Also -585 degrees. OK, a mathematician would use radians instead of degrees, but that just changes what the numbers are, not the fact that there are infinitely many of them.
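If you want to see all those values at once, here is a sketch using Python’s standard complex-math module. The function name nth_roots is my own invention for illustration, not a library routine; it just walks the circle of angles that all map to the same number.

```python
import cmath
import math

def nth_roots(z, n):
    """All n complex numbers w with w**n equal to z (for z nonzero)."""
    r = abs(z) ** (1.0 / n)
    theta = cmath.phase(z)
    return [r * cmath.exp(1j * (theta + 2 * math.pi * k) / n)
            for k in range(n)]

# The three cube roots of 8: one is the familiar 2, and the other
# two are complex numbers that also cube to 8.
roots = nth_roots(8, 3)
```

The list always has exactly n entries, which is the multi-valued-function problem in miniature: a rule that honestly matches one input to a whole set of outputs.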

    It’s nice to have options. We don’t always want options. Sometimes we just want one blasted simple answer to things. It’s coded into the language. We say “the square root of four”. We speak of “the arctangent of 1”, which is to say, “the angle with tangent of 1”. We only say “all square roots of four” if we’re making a point about overlooking options.

    If we’ve got a set of things, then we can pick out one of them. This is obvious, which means it is so very hard to prove. We just have to assume we can. Go ahead; assume we can. Our pick of the one thing out of this set is the “principal”. It’s not any more inherently right than the other possibilities. It’s just the one we choose to grab first.

    So. The principal square root of four is positive two. The principal arctangent of 1 is 45 degrees, or in the dialect of mathematicians π divided by four. We pick these values over other possibilities because they’re nice. What makes them nice? Well, they’re nice. Um. Most of their numbers aren’t that big. They use positive numbers if we have a choice in the matter. Deep down we still suspect negative numbers of being up to something.

    If nobody says otherwise then the principal square root is the positive one, or the one with a positive number in front of the imaginary part. If nobody says otherwise the principal arcsine is between -90 and +90 degrees (-π/2 and π/2). The principal arccosine is between 0 and 180 degrees (0 and π), unless someone says otherwise. The principal arctangent is … between -90 and 90 degrees, unless it’s between 0 and 180 degrees. You can count on the 0 to 90 part. Use your best judgement and roll with whatever develops for the other half of the range there. There’s not one answer that’s right for every possible case. The point of a principal value is to pick out one answer that’s usually a good starting point.
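Programming languages generally bake these choices in. Here is a sketch showing that Python’s standard math and cmath modules follow the principal-value conventions described above.

```python
import cmath
import math

# The principal square root is the positive one, or the one with a
# positive imaginary part.
assert math.sqrt(4) == 2.0
assert abs(cmath.sqrt(-4) - 2j) < 1e-12

# The principal arcsine lies in [-pi/2, pi/2]; arccosine in [0, pi];
# and the principal arctangent of 1 is pi/4, which is 45 degrees.
assert math.isclose(math.asin(-1), -math.pi / 2)
assert math.isclose(math.acos(-1), math.pi)
assert math.isclose(math.atan(1), math.pi / 4)

# The principal logarithm of a negative number has imaginary part pi.
assert math.isclose(cmath.log(-1).imag, math.pi)
```

Each of these calls quietly returns one value plucked from an infinite set of candidates, which is exactly the service a principal value provides.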

When you stare at what it means to be a function you realize that there’s a difference between the original function and the one that returns the principal value. The original function has a range that’s “sets of values”. The principal-value version has a range that’s just one value. If you’re being kind to your audience you make some note of that. Usually we note this by capitalizing the start of the function: “arcsin z” gives way to “Arcsin z”. “Log z” would be the principal-value version of “log z”. When you start pondering logarithms for negative numbers or for complex-valued numbers you get multiple values, the same way the arcsine function does.

    And it’s good to warn your audience which principal value you mean, especially for the arc-trigonometric-functions or logarithms. (I’ve never seen someone break the square root convention.) The principal value is about picking the most obvious and easy-to-work-with value out of a set of them. It’s just impossible to get everyone to agree on what the obvious is.

  • Joseph Nebus 6:00 pm on Tuesday, 6 December, 2016 Permalink | Reply
    Tags: , , , , , , ,   

    How November 2016 Treated My Mathematics Blog 

I didn’t forget about reviewing my last month’s readership statistics. I just ran short on time to gather and publish results is all. But now there’s an hour or so free to review what WordPress says my readership was like in November and see what was going on.


So, that was a bit disappointing. The start of an A To Z Glossary usually sees a pretty good bump in my readership. The steady publishing of a diverse set of articles usually helps. My busiest months have always been ones with an A To Z series going on. This November, though, there were 923 page views around here, from 575 distinct visitors. That’s up from October, with 907 page views and 536 distinct visitors. But it’s the same as September’s 922 page views from 575 distinct visitors. I blame the US presidential election. I don’t think it’s just that everyone I can still speak to was depressed by it. My weekly readership the two weeks after the election was about three-quarters that of the week before or the last two weeks of November. I’d be curious what other people saw. My humor blog didn’t see as severe a crash the week of the 14th, though.

    Well, the people who were around liked what they saw. There were 157 pages liked in November, up from 115 in September and October. That’s lower than what June and July, with Theorem Thursdays posts, had, and below what the A To Z in March and April drew. But it’s up still. Comments were similarly up, to 35 in November from October’s 24 and September’s 20. That’s up to around what Theorem Thursdays attracted.

    December starts with my mathematics blog having had 43,145 page views from a reported 18,022 distinct viewers. And it had 636 WordPress.com followers. You can be among them by clicking the “Follow” button on the upper right corner. It’s up from the 626 WordPress.com followers I had at the start of November. That’s not too bad, considering.

    I had a couple of perennial favorites among the most popular articles in November:

    This is the first time I can remember that a Reading The Comics post didn’t make the top five.

Sundays are the most popular days for reading posts here. 18 percent of page views come that day. I suppose that’s because I have settled on Sunday as a day to reliably post Reading the Comics essays. The most popular hour is 6 pm, which drew 11 percent of page views. In October Sundays were also the most popular day, with 18 percent of page views, and 6 pm was the most popular hour, though then it drew 14 percent of page views. September was the same. I don’t know why 6 pm is so special.

    As ever there wasn’t any search term poetry. But there were some good searches, including:

    • how many different ways can you draw a trapizium
    • comics back ground of the big bang nucleosynthesis
    • why cramer’s rule sucks (well, it kinda does)
    • oliver twist comic strip digarm
    • work standard approach sample comics
    • what is big bang nucleusynthesis comics strip

    I don’t understand the Oliver Twist or the nucleosynthesis stuff.

    And now the roster of countries and their readership, which for some reason is always popular:

    Country Page Views
    United States 534
    United Kingdom 78
    India 36
    Canada 33
    Philippines 22
    Germany 21
    Austria 18
    Puerto Rico 17
    Slovenia 14
    Singapore 13
    France 12
    Sweden 8
    Spain 8
    New Zealand 7
    Australia 6
    Israel 6
    Pakistan 5
    Hong Kong SAR China 4
    Portugal 4
    Belgium 3
    Colombia 3
    Netherlands 3
    Norway 3
    Serbia 3
    Thailand 3
    Brazil 2
    Croatia 2
    Finland 2
    Malaysia 2
    Poland 2
    Switzerland 2
    Argentina 1
    Bulgaria 1
    Cameroon 1
    Cyprus 1
    Czech Republic 1 (***)
    Denmark 1
    Japan 1 (*)
    Lithuania 1
    Macedonia 1
    Mexico 1 (*)
    Russia 1
    Saudi Arabia 1 (*)
    South Africa 1 (*)
    United Arab Emirates 1 (*)
    Vietnam 1

That’s 46 countries, the same as last month. 15 of them were single-reader countries; there were 20 single-reader countries in October. Japan, Mexico, Saudi Arabia, South Africa, and the United Arab Emirates have been single-reader countries for two months running. The Czech Republic has been one for four months.

    Always happy to see Singapore reading me (I taught there for several years). The “European Union” listing seems to have vanished, here and on my humor blog. I’m sure that doesn’t signal anything ominous at all.

  • Joseph Nebus 6:00 pm on Monday, 5 December, 2016 Permalink | Reply
    Tags: , , , , Daffy Duck, , , , ,   

    The End 2016 Mathematics A To Z: Osculating Circle 

    I’m happy to say it’s another request today. This one’s from HowardAt58, author of the Saving School Math blog. He’s given me some great inspiration in the past.

    Osculating Circle.

    It’s right there in the name. Osculating. You know what that is from that one Daffy Duck cartoon where he cries out “Greetings, Gate, let’s osculate” while wearing a moustache. Daffy’s imitating somebody there, but goodness knows who. Someday the mystery drives the young you to a dictionary web site. Osculate means kiss. This doesn’t seem to explain the scene. Daffy was imitating Jerry Colonna. That meant something in 1943. You can find him on old-time radio recordings. I think he’s funny, in that 40s style.

Make the substitution. A kissing circle. Suppose it’s not some playground antic one level up from the Kissing Bandit that plagues recess, yet one or two levels down from what we imagine we’d do in high school. It suggests a circle that comes really close to something, that touches it a moment, and then goes off its own way.

    But then touching. We know another word for that. It’s the root behind “tangent”. Tangent is a trigonometry term. But it appears in calculus too. The tangent line is a line that touches a curve at one specific point and is going in the same direction as the original curve is at that point. We like this because … well, we do. The tangent line is a good approximation of the original curve, at least at the tangent point and for some region local to that. The tangent touches the original curve, and maybe it does something else later on. What could kissing be?

    The osculating circle is about approximating an interesting thing with a well-behaved thing. So are similar things with names like “osculating curve” or “osculating sphere”. We need that a lot. Interesting things are complicated. Well-behaved things are understood. We move from what we understand to what we would like to know, often, by an approximation. This is why we have tangent lines. This is why we build polynomials that approximate an interesting function. They share the original function’s value, and its derivative’s value. A polynomial approximation can share many derivatives. If the function is nice enough, and the polynomial big enough, it can be impossible to tell the difference between the polynomial and the original function.

    The osculating circle, or sphere, isn’t so concerned with matching derivatives. I know, I’m as shocked as you are. Well, it matches the first and the second derivatives of the original curve. Anything past that, though, it matches only by luck. The osculating circle is instead about matching the curvature of the original curve. The curvature is what you think it would be: it’s how much a function curves. If you imagine looking closely at the original curve and an osculating circle they appear to be two arcs that come together. They must touch at one point. They might touch at others, but that’s incidental.
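
    Concretely, the circle’s radius and center come straight out of those first two derivatives. Here’s a minimal sketch in Python, assuming the curve is a graph y = f(x) and you supply the derivatives yourself (the helper function is my own, not anything standard):

```python
# A sketch of finding the osculating circle of y = f(x) at a point,
# using the standard radius-of-curvature formula.
def osculating_circle(f, fp, fpp, x):
    """Return (center_x, center_y, radius) of the osculating circle
    of y = f(x) at x, given the first and second derivatives fp, fpp."""
    y, d1, d2 = f(x), fp(x), fpp(x)
    if d2 == 0:
        raise ValueError("curve is locally straight; no finite osculating circle")
    radius = (1 + d1 * d1) ** 1.5 / abs(d2)
    # the center lies along the normal line, on the concave side
    cx = x - d1 * (1 + d1 * d1) / d2
    cy = y + (1 + d1 * d1) / d2
    return cx, cy, radius

# parabola y = x^2 at the vertex: the kissing circle has radius one-half
print(osculating_circle(lambda t: t * t, lambda t: 2 * t, lambda t: 2.0, 0.0))
# → (0.0, 0.5, 0.5)
```

    At the bottom of the parabola the circle sits at (0, 1/2) with radius 1/2; move out to x = 1 and the same formula puts a much bigger circle, centered at (-4, 3.5), kissing the curve there.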

    Osculating circles, and osculating spheres, sneak out of mathematics and into practical work. This is because we often want to work with things that are almost circles. The surface of the Earth, for example, is not a sphere. But it’s only a tiny bit off. It’s off in ways that you only notice if you are doing high-precision mapping. Or taking close measurements of things in the sky. Sometimes we do this. So we map the Earth locally as if it were a perfect sphere, with curvature exactly what its curvature is at our observation post.

    Or we might be observing something moving in orbit. If the universe had only two things in it, and they were the correct two things, all orbits would be simple: they would be ellipses. They would have to be “point masses”, things that have mass without any volume. They never are. They’re always shapes. Spheres would be fine, but they’re never perfect spheres even. The slight difference between a perfect sphere and whatever the things really are affects the orbit. Or the other things in the universe tug on the orbiting things. Or the thing orbiting makes a course correction. All these things make little changes in the orbiting thing’s orbit. The actual orbit of the thing is a complicated curve. The orbit we could calculate is an osculating — well, an osculating ellipse, rather than an osculating circle. Similar idea, though. Call it an osculating orbit if you’d rather.

    That osculating circles have practical uses doesn’t mean they aren’t respectable mathematics. I’ll concede they’re not used as much as polynomials or sine curves are. I suppose that’s because polynomials and sine curves have nicer derivatives than circles do. But osculating circles do turn up as ways to try solving nonlinear differential equations. We need the help. Linear differential equations anyone can solve. Nonlinear differential equations are pretty much impossible. They also turn up in signal processing, as ways to find the frequencies of a signal from a sampling of data. This, too, we would like to know.

    We get the name “osculating circle” from Gottfried Wilhelm Leibniz. This might not surprise. Finding easy-to-understand shapes that approximate interesting shapes is why we have calculus. Isaac Newton described a way of making them in the Principia Mathematica. This also might not surprise. Of course they would on this subject come so close together without kissing.

  • Joseph Nebus 6:00 pm on Sunday, 4 December, 2016 Permalink | Reply
    Tags: , , , egg salad, Eureka, , ,   

    Reading the Comics, December 3, 2016: Cute Little Jokes Edition 

    Comic Strip Master Command apparently wanted me to have a bunch of easy little pieces that don’t inspire rambling essays. Message received!

    Mark Litzler’s Joe Vanilla for the 27th is a wordplay joke in which any mathematical content is incidental. It could be anything put in a positive light; numbers are just easy things to arrange so. From the prominent appearance of ‘3’ and ‘4’ I supposed Litzler was using the digits of π, but if he is, it’s from some part of π that I don’t recognize. (That would be any part after the seventeenth digit. I’m not obsessive about π digits.)

    Samson’s Dark Side Of The Horse is whatever the equivalent of punning is for Roman Numerals. I like Horace blushing.

    John Deering’s Strange Brew for the 28th is a paint-by-numbers joke, and one I don’t see done often. And there is beauty in the appearance of mathematics. It’s not appreciated enough. I think looking at the tables of integral formulas on the inside back cover of a calculus book should prove the point, though. All those rows of integral signs, and the sprawls of symbols after them, show this abstract beauty. I’ve surely mentioned the time when the creative-arts editor for my undergraduate leftist weekly paper asked for a page of mathematics or physics work to include as a picture, too. I used the problem that inspired my “Why Stuff Can Orbit” sequence over on my mathematics blog. The editor loved the look of it all, even if he didn’t know what most of it meant.

    Scientisty type: 'Yessirree, there we have it: I've just proved we're completely alone in a cold, dying universe'. This provokes an angry stare from the other scientisty type, who's wearing an 'I Believe' grey-aliens T-shirt.

    Niklas Eriksson’s Carpe Diem for the 29th of November, 2016. I’m not sure why this has to be worked out in the break room but I guess you work out life where you do. Anyway, I’m glad to see the Grey Aliens allow for Green Aliens representing them on t-shirts.

    Niklas Eriksson’s Carpe Diem for the 29th is a joke about life, I suppose. It uses a sprawled blackboard full of symbols to play the part of the proof. It’s gibberish, of course, although I notice how many mathematics cliches get smooshed into it. There’s a 3.1451 — I assume that’s a garbling of the digits of π — under a square root sign. There’s an “E = mc”, I suppose a garbled bit of Einstein’s Famous Equation in there. There’s a “cos 360”. 360 evokes the number of degrees in a circle, but mathematicians don’t tend to use degrees. There are analytic reasons why we find it nicer to use radians, for which the equivalent would be “cos 2π”. If we wrote that at all, since the cosine of 2π is one of the few cosines everyone knows. Every mathematician knows. It’s 1. Well, maybe the work just got to that point and it hasn’t been cleaned up.

    Eriksson’s Carpe Diem reappears the 30th, with a few blackboards with less mathematics to suggest someone having a creative block. It does happen to us all. My experience is mathematicians don’t tend to say “Eureka” when we do get a good idea, though. It’s more often some vague mutterings and “well what if” while we form the idea. And then giggling or even laughing once we’re sure we’ve got something. This may be just me and my friends. But it is a real rush when we have it.

    Dan Collins’s Looks Good On Paper for the 29th tells the Möbius strip joke. It’s a well-rendered one, though; I like that there is a readable strip in there and that it’s distorted to fit the geometry.

    Henry Scarpelli and Craig Boldman’s Archie rerun for the 2nd of December tosses off the old gag about not needing mathematics now that we have calculators. It’s not a strip about that, and that’s fine.

    Jughead and Archie reflect how it seems like a waste to learn mathematics when they have calculators, or spelling when they have spell-checkers. Archie suggests getting a snack and his dad says he's *got* a garbage disposal.

    Henry Scarpelli and Craig Boldman’s Archie rerun for the 2nd of December, 2016. Now, not to nitpick, but Jughead and Archie don’t declare it *is* a waste of time to learn mathematics or spelling when a computer can do that work. Also, why don’t we have a word like ‘calculator’ for ‘spell-checker’? I mean, yes, ‘spell-checker’ is an acceptable word, but it’s like calling a ‘calculator’ an ‘arithmetic-doer’.

    Mark Anderson’s Andertoons finally appeared the 2nd. It’s a resistant-student joke. And a bit of wordplay.

    Ruben Bolling’s Super-Fun-Pak Comix from the 2nd featured an installment of Tautological But True. One might point out they’re using “average” here to mean “arithmetic mean”. There probably isn’t enough egg salad consumed to let everyone have a median-sized serving. And I wouldn’t make any guesses about the geometric mean serving. But the default meaning of “average” is the arithmetic mean. Anyone using one of the other averages would say so ahead of time or else is trying to pull something.
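
    For a small illustration of the gap between those averages (the numbers here are made up by me, not anything from the comic), Python’s standard library computes all three:

```python
# The three common "averages" of the same data can come out quite differently.
import statistics

servings = [1, 1, 2, 2, 3, 12]  # hypothetical egg-salad servings, in ounces

print(statistics.mean(servings))            # arithmetic mean: 3.5
print(statistics.median(servings))          # median: 2.0
print(statistics.geometric_mean(servings))  # geometric mean: about 2.29
```

    One big serving drags the arithmetic mean well above the median, which is exactly the sort of gap that makes it worth saying which “average” you mean.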

  • Joseph Nebus 6:00 pm on Friday, 2 December, 2016 Permalink | Reply
    Tags: , , , , , , , ,   

    The End 2016 Mathematics A To Z: Normal Numbers 

    Today’s A To Z term is another of gaurish’s requests. It’s also a fun one so I’m glad to have reason to write about it.

    Normal Numbers

    A normal number is any real number you never heard of.

    Yeah, that’s not what we say a normal number is. But that’s what a normal number is. If we could imagine the real numbers to be a stream, and that we could reach into it and pluck out a water-drop that was a single number, we know what we would likely pick. It would be an irrational number. It would be a transcendental number. And it would be a normal number.

    We know normal numbers — or we would, anyway — by looking at their representation in digits. For example, π is a number that starts out 3.1415926535897932384626433832795028841971 and so on forever. Look at those digits. Some of them are 1’s. How many? How many are 2’s? How many are 3’s? Are there more than you would expect? Are there fewer? What would you expect?

    Expect. That’s the key. What should we expect in the digits of any number? The numbers we work with don’t offer much help. A whole number, like 2? That has a decimal representation of a single ‘2’ and infinitely many zeroes past the decimal point. Two and a half? A single ‘2’, a single ‘5’, and then infinitely many zeroes past the decimal point. One-seventh? Well, we get infinitely many 1’s, 4’s, 2’s, 8’s, 5’s, and 7’s. Never any 3’s, nor any 0’s, nor 6’s or 9’s. This doesn’t tell us anything about how often we would expect ‘8’ to appear in the digits of π.
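
    Those repeating digits of one-seventh are just long division at work, which also shows why a rational number can never be normal: its digits eventually cycle. A quick sketch (the helper is my own, nothing standard):

```python
# Long division streams the decimal digits of p/q one at a time.
def decimal_digits(p, q, n):
    """First n digits after the decimal point of p/q (assumes 0 <= p < q)."""
    digits = []
    r = p
    for _ in range(n):
        r *= 10
        digits.append(r // q)  # next digit is the quotient
        r %= q                 # carry the remainder forward
    return digits

print(decimal_digits(1, 7, 12))
# → [1, 4, 2, 8, 5, 7, 1, 4, 2, 8, 5, 7]
```

    There are only q possible remainders, so some remainder must repeat, and from that point the digits cycle forever.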

    In a normal number we get all the decimal digits. And we get each of them about one-tenth of the time. If all we had was a chart of how often digits turn up we couldn’t tell the summary of one normal number from the summary of any other normal number. Nor could we tell either from the summary of a perfectly uniform randomly drawn number.

    It goes beyond single digits, though. Look at pairs of digits. How often does ‘14’ turn up in the digits of a normal number? … Well, something like once for every hundred pairs of digits you draw from the number. Look at triplets of digits. ‘141’ should turn up about once in every thousand sets of three digits. ‘1415’ should turn up about once in every ten thousand sets of four digits. Any finite string of digits should turn up, and exactly as often as any other finite string of the same length.

    That’s in the full representation, if you look at all the infinitely many digits the normal number has to offer. If all you have is a slice then some digits are going to be more common and some less common. That’s similar to how, if you fairly toss a coin (say) forty times, there’s a good chance you’ll get tails something other than exactly twenty times. Look at the first 31 digits of π and there’s not a zero to be found. But as you survey more digits you get closer and closer to the expected average frequency, the same way coin flips get closer and closer to 50 percent tails. Zero is a rarity in the first few dozen digits. It’s about one-tenth of the first 3500 digits.

    The digits of a specific number are not random, not if we know what the number is. But we can be presented with a subset of its digits and have no good way of guessing what the next digit might be. That is getting into the same strange territory in which we can speak about the “chance” of a month having a Friday the 13th even though the appearances of Fridays the 13th have absolutely no randomness to them.

    This has staggering implications. Some of them inspire an argument in the science fiction Usenet newsgroup rec.arts.sf.written every two years or so. Probably it does so in other venues; Usenet is just my first home and love for this. As a minor point in Carl Sagan’s novel Contact, possibly-imaginary aliens reveal there’s a pattern hidden in the digits of π. (It’s not in the movie version, which is a shame. But to include it would require people watching a computer. So that could not make for a good movie scene, we now know.) Look far enough into π, says the book, and there’s suddenly a string of digits that are nearly all zeroes, interrupted with a few ones. Arrange the zeroes and ones into a rectangle and it draws a pixel-art circle. And the aliens don’t know how something astounding like that could be.

    Nonsense, respond the kind of science fiction reader that likes to identify what the nonsense in science fiction stories is. (Spoiler: it’s the science. In this case, the mathematics too.) In a normal number every finite string of digits appears. It would be truly astounding if there weren’t an encoded circle in the digits of π. Indeed, it would be impossible for there not to be infinitely many circles of every possible size encoded in every possible way in the digits of π. If the aliens are amazed by that they would be amazed to find how every triangle has three corners.

    I’m a more forgiving reader. And I’ll give Sagan this amazingness. I have two reasons. The first reason is on the grounds of discoverability. Yes, the digits of a normal number will have in them every possible finite “message” encoded every possible way. (I put the quotes around “message” because it feels like an abuse to call something a message if it has no sender. But it’s hard to not see as a “message” something that seems to mean something, since we live in an era that accepts the Death of the Author as a concept at least.) Pick your classic cypher ‘1 = A, 2 = B, 3 = C’ and so on, and take any normal number. If you look far enough into its digits you will find every message you might ever wish to send, every book you could read. Every normal number holds Jorge Luis Borges’s Library of Babel, and almost every real number is a normal number.

    But. The key there is if you look far enough. Look above; the first 31 digits of π have no 0’s, when you would expect three of them. There’s no 22 in them either, even though that string has as much right to appear as, say, 15 does. And we will only ever know finitely many digits of π. It may be staggeringly many digits, sure. It already is. But it will never be enough to be confident that a circle, or any other long enough “message”, must appear. It is staggering that a detectable “message” that long should be in the tiny slice of digits that we might ever get to see.

    And it’s harder than that. Sagan’s book says the circle appears in whatever base π gets represented in. So not only does the aliens’ circle pop up in base ten, but also in base two and base sixteen and all the other, even less important bases. The circle happening to appear in the accessible digits of π might be an imaginable coincidence in some base. There’s infinitely many bases, one of them has to be lucky, right? But to appear in the accessible digits of π in every one of them? That’s staggeringly impossible. I say the aliens are correct to be amazed.

    Now to my second reason to side with the book. It’s true that any normal number will have any “message” contained in it. So who says that π is a normal number?

    We think it is. It looks like a normal number. We have figured out many, many digits of π and they’re distributed the way we would expect from a normal number. And we know that nearly all real numbers are normal numbers. If I had to put money on it I would bet π is normal. It’s the clearly safe bet. But nobody has ever proved that it is, nor that it isn’t. Whether π is normal or not is a fit subject for conjecture. A writer of science fiction may suppose anything she likes about its normality without current knowledge saying she’s wrong.

    It’s easy to imagine numbers that aren’t normal. Rational numbers aren’t, for example. If you followed my instructions and made your own transcendental number then you made a non-normal number. It’s possible that π should be non-normal. The first thirty million digits or so look good, though, if you think normal is good. But what’s thirty million against infinitely many possible counterexamples? For all we know, there comes a time when π runs out of interesting-looking digits and turns into an unpredictable little fluttering between 6 and 8.

    It’s hard to prove that any numbers we’d like to know about are normal. We don’t know about π. We don’t know about e, the base of the natural logarithm. We don’t know about the natural logarithm of 2. There is a proof that the square root of two (and other non-square whole numbers, like 3 or 5) is normal in base two. But my understanding is it’s a nonstandard approach that isn’t quite satisfactory to experts in the field. I’m not expert so I can’t say why it isn’t quite satisfactory. If the proof’s authors or grad students wish to quarrel with my characterization I’m happy to give space for their rebuttal.

    It’s much the way transcendental numbers were in the 19th century. We understood there to be this class of numbers that comprises nearly every number. We just didn’t have many examples. Even now we’re short on examples of transcendental numbers, so maybe we’re not that badly off with normal numbers.

    We can construct normal numbers. For example, there’s the Champernowne Constant. It’s the number you would make if you wanted to show you could make a normal number. It’s 0.12345678910111213141516171819202122232425 and I bet you can imagine how that develops from that point. (David Gawen Champernowne proved it was normal, which is the hard part.) There’s other ways to build normal numbers too, if you like. But those numbers aren’t of any interest except that we know them to be normal.
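
    You can watch the Champernowne constant’s digit frequencies settle toward one-tenth with a few lines of code (a rough sketch; the helper name is mine):

```python
# Build the first chunk of the Champernowne constant's digits and
# see each digit's share drift toward 0.1.
from collections import Counter

def champernowne_digits(n):
    """First n decimal digits of 0.123456789101112131415..."""
    out = []
    k = 1
    while len(out) < n:
        out.extend(str(k))  # append the digits of the next counting number
        k += 1
    return out[:n]

digits = champernowne_digits(100_000)
freq = Counter(digits)
for d in sorted(freq):
    print(d, freq[d] / len(digits))
```

    At a hundred thousand digits the frequencies are still visibly lopsided — the leading digits of the counting numbers overrepresent ‘1’ and ‘2’ for a while — but normality is a statement about the infinite limit, and Champernowne proved the limit comes out right.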

    Mere normality is tied to a base. A number might be normal in base ten (the way normal people write numbers) but not in base two or base sixteen (which computers and people working on computers use). It might be normal in base twelve, used by nobody except mathematics popularizers of the 1960s explaining bases, but not normal in base ten. There can be numbers normal in every base. They’re called “absolutely normal”. Nearly all real numbers are absolutely normal. Wacław Sierpiński constructed the first known absolutely normal number in 1917. If you got in on the fractals boom of the 80s and 90s you know his name, although without the Polish spelling. He did stuff with gaskets and curves and carpets you wouldn’t believe. I’ve never seen Sierpiński’s construction of an absolutely normal number. From my references I’m not sure if we know how to construct any other absolutely normal numbers.

    So that is the strange state of things. Nearly every real number is normal. Nearly every number is absolutely normal. We know a couple normal numbers. We know at least one absolutely normal number. But we haven’t (to my knowledge) proved any number that’s otherwise interesting is also a normal number. This is why I say: a normal number is any real number you never heard of.

    • gaurish 5:42 am on Saturday, 3 December, 2016 Permalink | Reply

      Beautiful exposition! Using pi as motivation for the discussion was a great idea. The fact that, unlike primality, normality is associated with the base system involved fascinated me when I first came across normal numbers. Thanks!

      Liked by 1 person

      • Joseph Nebus 4:48 pm on Friday, 9 December, 2016 Permalink | Reply

        Aw, thank you. You’re most kind. π is a good number to use for explaining so many kinds of numbers. It’s familiar to people and it feels friendly, but it’s still an example of so many of the most interesting traits of numbers. Or, as with normality, it looks like it probably is. It’s easy to see why the number is so fascinating.

        Liked by 1 person

  • Joseph Nebus 6:00 pm on Thursday, 1 December, 2016 Permalink | Reply
    Tags: , , , , ,   

    When Is Thanksgiving Most Likely To Happen? 

    So my question from last Thursday nagged at my mind. And I learned that Octave (a Matlab clone that’s rather cheaper) has a function that calculates the day of the week for any given day. And I spent longer than I would have expected fiddling with the formatting to get what I wanted to know.

    It turns out there are some days in November more likely to be the fourth Thursday than others are. (This is the current standard for Thanksgiving Day in the United States.) And as I’d suspected without being able to prove, this doesn’t quite match the breakdown of which months are more likely to have Friday the 13ths. That is, it’s more likely that an arbitrarily selected month will start on Sunday than any other day of the week. It’s least likely that an arbitrarily selected month will start on a Saturday or Monday. The difference is extremely tiny; there are only four more Sunday-starting months than there are Monday-starting months over the course of 400 years.

    But an arbitrary month is different from an arbitrary November. It turns out Novembers are most likely to start on a Sunday, Tuesday, or Thursday. And that makes the 26th, 24th, and 22nd the most likely days to be Thanksgiving. The 23rd and 25th are the least likely days to be Thanksgiving. Here’s the full roster, if I haven’t made any serious mistakes with it:

    Date in November   Times it will be Thanksgiving (in 400 years)
    22                 58
    23                 56
    24                 58
    25                 56
    26                 58
    27                 57
    28                 57
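
    If you’d like to check the table without Octave, the same tally can be sketched with Python’s standard library (an independent check, not the script I actually ran):

```python
# Tally which November date is the fourth Thursday over one full
# 400-year Gregorian cycle; any such cycle gives the same counts.
from datetime import date
from collections import Counter

counts = Counter()
for year in range(2000, 2400):
    # weekday(): Monday is 0, so Thursday is 3
    thursdays = [d for d in range(1, 31) if date(year, 11, d).weekday() == 3]
    counts[thursdays[3]] += 1  # the fourth Thursday is Thanksgiving

for day in sorted(counts):
    print(day, counts[day])
```

    This should reproduce the tally above, since the Gregorian calendar repeats its weekday pattern exactly every 400 years.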

    I don’t pretend there’s any significance to this. But it is another of those interesting quirks of probability. What you would say the probability is of a month starting on a Sunday — equivalently, of its having a Friday the 13th, or of its fourth Thursday being the 26th — depends on how much you know about the month. If you know only that it’s a month on the Gregorian calendar it’s one thing (specifically, it’s 688/4800, or about 0.14333). If you know only that it’s a November then it’s another (58/400, or 0.145). If you know only that it’s a month in 2016 then it’s another yet (1/12, or about 0.08333). If you know that it’s November 2016 then the probability is 0. Information does strange things to probability questions.
