Tagged: infinity

  • Joseph Nebus 6:00 pm on Sunday, 12 February, 2017 Permalink | Reply
    Tags: Agnes, infinity, Lay Lines, Pooch Cafe, Rabbits Against Magic

    Reading the Comics, February 6, 2017: Another Pictureless Half-Week Edition 

    Got another little flood of mathematically-themed comic strips last week, and so once again I’ll split them along something that looks kind of middle-ish. Also, this is another bunch of GoComics.com-only posts. Since those seem to be accessible to anyone, subscriber or not, indefinitely far into the future, I don’t feel I can put the comics directly up, and will trust you all to click on the links you find interesting. Which is fine; the new GoComics.com design makes it annoyingly hard to download a comic strip. I don’t think that was their intention. But that’s one of the two nagging problems I have with their new design. So you know.

    Tony Cochran’s Agnes for the 5th sees a brand-new mathematics. Always dangerous stuff. But mathematicians do invent, or discover, new things in mathematics all the time. Part of the task is naming the things in it. That’s something which takes talent. Some people, such as Leonhard Euler, had the knack a great novelist has for putting names to things. The rest of us muddle along. Often if there’s any real-world inspiration, or resemblance to anything, we’ll rely on that. And we look for terminology that evokes similar ideas in other fields. … And, Agnes would like to know, there is mathematics that’s about approximate answers, being “right around” the desired answer. Unfortunately, that’s hard. (It’s all hard, if you’re going to take it seriously, much like everything else people do.)

    Scott Hilburn’s The Argyle Sweater for the 5th is the anthropomorphic numerals joke for this essay.

    Carol Lay’s Lay Lines for the 6th depicts the hazards of thinking deeply and hard about the infinitely large and the infinitesimally small. They’re hard. Our intuition seems well-suited to handling a modest bunch of household-sized things. Logic guides us when thinking about the infinitely large or small, but it takes a long time to get truly conversant and comfortable with it all.

    Paul Gilligan’s Pooch Cafe for the 6th sees Poncho try to argue there are thermodynamic reasons for not being kind. Reasoning about why one should be kind (or not) is the business of philosophers and I won’t overstep my expertise. Poncho’s mathematics, that’s something I can write about. He argues “if you give something of yourself, you inherently have less”. That seems to be arguing for a global conservation of self-ness, that the thing can’t be created or lost, merely transferred around. That’s fair enough as a description of what the first law of thermodynamics tells us about energy. The equation he reads off says that the change in the internal energy (ΔU) equals the heat added to the system (Q) minus the work done by the system (W). Conservation laws aren’t unique to thermodynamics. But Poncho may be aware of just how universal and powerful thermodynamics is. I’m open to an argument that it’s the most important field of physics.
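    To see the first law in code, here’s a minimal sketch (my own toy numbers, not from the strip) of ΔU = Q − W:

```python
def delta_internal_energy(heat_added_q, work_done_w):
    """First law of thermodynamics: the change in internal energy
    equals heat added to the system minus work done by the system."""
    return heat_added_q - work_done_w

# A gas absorbs 500 J of heat while doing 200 J of work on its surroundings:
print(delta_internal_energy(500.0, 200.0))  # 300.0
```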

    Jonathan Lemon’s Rabbits Against Magic for the 6th is another strip Intro to Calculus instructors can use for their presentation on instantaneous versus average velocities. There’s been a bunch of them recently. I wonder if someone at Comic Strip Master Command got a speeding ticket.
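    For those Intro to Calculus presentations, a quick numerical sketch of average versus instantaneous velocity, with a made-up position function of my own. The speeding-ticket relevance: the speedometer reads the instantaneous one.

```python
def position(t):
    """Hypothetical position of an accelerating car: x(t) = 3*t**2 meters."""
    return 3.0 * t * t

def average_velocity(x, t0, t1):
    """Change in position over change in time."""
    return (x(t1) - x(t0)) / (t1 - t0)

def instantaneous_velocity(x, t, h=1e-6):
    """Central-difference estimate of the derivative at time t."""
    return (x(t + h) - x(t - h)) / (2 * h)

# Over the first ten seconds the car averages 30 m/s ...
print(average_velocity(position, 0.0, 10.0))  # 30.0
# ... but at t = 10 it is momentarily doing about 60 m/s.
print(round(instantaneous_velocity(position, 10.0), 3))  # 60.0
```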

    Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 6th is about numeric bases. They’re fun to learn about. There’s an arbitrariness in the way we represent concepts. I think we can understand better what kinds of problems seem easy and what kinds seem harder if we write them out different ways. But base eleven is only good for jokes.
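    If you want to play along, here’s a sketch of writing numbers in base eleven, using ‘A’ to stand for the one extra digit the base needs:

```python
DIGITS = "0123456789A"  # 'A' is the extra digit base eleven requires

def to_base_eleven(n):
    """Render a non-negative integer in base eleven."""
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, 11)
        out.append(DIGITS[r])
    return "".join(reversed(out))

print(to_base_eleven(121))  # "100", since 121 = 11**2
print(to_base_eleven(21))   # "1A", since 21 = 1*11 + 10
```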

    • davekingsbury 10:01 pm on Monday, 13 February, 2017 Permalink | Reply

      He argues “if you give something of yourself, you inherently have less”. That seems to be arguing for a global conservation of self-ness, that the thing can’t be created or lost, merely transferred around.

      How, I wonder, to marry that with Juliet’s declaration of love for Romeo?

      “My bounty is as boundless as the sea,
      My love as deep; the more I give to thee,
      The more I have, for both are infinite.”


      • Joseph Nebus 11:08 pm on Thursday, 16 February, 2017 Permalink | Reply

        Oh, well, infinities are just trouble no matter what. Anything can happen with them.

        I suppose there’s also the question of how the Banach-Tarski Paradox affects love.


    • Downpuppy (@Downpuppy) 12:30 am on Tuesday, 14 February, 2017 Permalink | Reply

      Agnes is the first Fuzzy Math reference I’ve seen in about 10 years.

      Squirrel Girl counted to 31 on one hand to defeat Count Nefaria, but SMBC is more an ASL snub


      • Joseph Nebus 11:12 pm on Thursday, 16 February, 2017 Permalink | Reply

        I’m a little surprised fuzzy mathematics doesn’t get used for more comic strips, but I don’t suppose it lends itself to too many different jokes. On the other hand, neither does Pi Day and we’ll see a bunch of those over the coming month.

        I had expected, really, Saturday Morning Breakfast Cereal to go with 1,024 as a natural base if you use your hands in a particularly digit-efficient way.


  • Joseph Nebus 6:00 pm on Wednesday, 30 November, 2016 Permalink | Reply
    Tags: infinity, Monster Group

    The End 2016 Mathematics A To Z: Monster Group 

    Today’s is one of my requested mathematics terms. This one comes to us from group theory, by way of Gaurish, and as ever I’m thankful for the prompt.

    Monster Group.

    It’s hard to learn from an example. Examples are great, and I wouldn’t try teaching anything subtle without one. Might not even try teaching the obvious without one. But a single example is dangerous. The learner has trouble telling what parts of the example are the general lesson to learn and what parts are just things that happen to be true for that case. Having several examples, of different kinds of things, saves the student. The thing in common to many different examples is the thing to retain.

    The mathematics major learns group theory in Introduction To Not That Kind Of Algebra, MAT 351. A group extracts the barest essence of arithmetic: a bunch of things and the ability to add them together. So what’s an example? … Well, the integers do nicely. What’s another example? … Well, the integers modulo two, where the only things are 0 and 1 and we know 1 + 1 equals 0. What’s another example? … The integers modulo three, where the only things are 0 and 1 and 2 and we know 1 + 2 equals 0. How about another? … The integers modulo four? Modulo five?
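    A brute-force check of the group axioms makes those examples concrete. This sketch (mine, not from any MAT 351 syllabus) verifies the integers modulo three under addition, where 1 + 2 equals 0:

```python
def check_group_axioms(elements, op):
    """Brute-force check of closure, associativity, identity, and inverses."""
    identities = [e for e in elements
                  if all(op(e, x) == x == op(x, e) for x in elements)]
    assert len(identities) == 1, "need exactly one identity element"
    e = identities[0]
    for a in elements:
        assert any(op(a, b) == e for b in elements), "missing an inverse"
        for b in elements:
            assert op(a, b) in elements, "not closed under the operation"
            for c in elements:
                assert op(op(a, b), c) == op(a, op(b, c)), "not associative"
    return True

# The integers modulo three, with addition wrapping around:
print(check_group_axioms([0, 1, 2], lambda a, b: (a + b) % 3))  # True
```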

    All true. All, also, basically the same thing. The whole set of integers, or of real numbers, is different. But as finite groups, the integers modulo anything are nice, easy-to-understand groups. They’re known as Cyclic Groups, for reasons I’ll explain if asked. But all the Cyclic Groups are kind of the same.

    So how about another example? And here we get some good ones. There’s the Permutation Groups. These are fun. You start off with a set of things. You can label them anything you like, but you’re daft if you don’t label them the counting numbers. So, say, the set of things 1, 2, 3, 4, 5. Start with them in that order. A permutation is the swapping of any pair of those things. So swapping, say, the second and fifth things to get the list 1, 5, 3, 4, 2. The collection of all the swaps you can make is the Permutation Group on this set of things. The things in the group are not 1, 2, 3, 4, 5. The things in the permutation group are “swap the second and fifth thing” or “swap the third and first thing” or “swap the fourth and the third thing”. You maybe feel uneasy about this. That’s all right. I suggest playing with this until you feel comfortable because it is a lot of fun to play with. Playing in this case means writing out all the ways you can swap stuff, which you can always do as a string of swaps of exactly two things.
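    The playing is easy to mimic in code. This sketch swaps pairs of things, and then confirms that strings of swaps reach every one of the 5! = 120 orderings of five things:

```python
def swap(seq, i, j):
    """Swap the i-th and j-th things (1-indexed, as in the essay)."""
    out = list(seq)
    out[i - 1], out[j - 1] = out[j - 1], out[i - 1]
    return tuple(out)

start = (1, 2, 3, 4, 5)
print(swap(start, 2, 5))  # (1, 5, 3, 4, 2), swapping the second and fifth things

# Strings of swaps reach every ordering of the five things:
reachable = {start}
frontier = [start]
while frontier:
    seq = frontier.pop()
    for i in range(1, 5):
        for j in range(i + 1, 6):
            s = swap(seq, i, j)
            if s not in reachable:
                reachable.add(s)
                frontier.append(s)
print(len(reachable))  # 120, which is 5 factorial
```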

    (Some people may remember an episode of Futurama that involved a brain-swapping machine. Or a body-swapping machine, if you prefer. The gimmick of the episode is that two people could only swap bodies/brains exactly one time. The problem was how to get everybody back in their correct bodies. It turns out to be possible to do, and one of the show’s writers did write a proof of it. It’s shown on-screen for a moment. Many fans were awestruck by an episode of the show inspiring a Mathematical Theorem. They’re overestimating how rare theorems are. But it is fun when real mathematics gets done as a side effect of telling a good joke. Anyway, the theorem fits well in group theory and the study of these permutation groups.)

    So the student wanting examples of groups can get the Permutation Group on three elements. Or the Permutation Group on four elements. The Permutation Group on five elements. … You kind of see, this is certainly different from those Cyclic Groups. But they’re all kind of like each other.

    An “Alternating Group” is one where all the elements in it are an even number of permutations. So, “swap the second and fifth things” would not be in an alternating group. But “swap the second and fifth things, and swap the fourth and second things” would be. And so the student needing examples can look at the Alternating Group on two elements. Or the Alternating Group on three elements. The Alternating Group on four elements. And so on. It’s slightly different from the Permutation Group. It’s certainly different from the Cyclic Group. But still, if you’ve mastered the Alternating Group on five elements you aren’t going to see the Alternating Group on six elements as all that different.
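    Whether a given rearrangement belongs in an Alternating Group comes down to whether it takes an even number of swaps to reach. Counting inversions, pairs that are out of order, is one way to check; a sketch:

```python
def parity(perm):
    """Count inversions: 0 means an even permutation (in the alternating
    group), 1 means an odd one."""
    inversions = sum(1
                     for i in range(len(perm))
                     for j in range(i + 1, len(perm))
                     if perm[i] > perm[j])
    return inversions % 2

# "Swap the second and fifth things": one swap, odd, so not alternating.
print(parity((1, 5, 3, 4, 2)))  # 1
# Follow it with "swap the fourth and second things": two swaps, so even.
print(parity((1, 4, 3, 5, 2)))  # 0
```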

    Cyclic Groups and Alternating Groups have some stuff in common. Permutation Groups not so much and I’m going to leave them in the above paragraph, waving, since they got me to the Alternating Groups I wanted.

    One is that they’re finite. At least they can be. I like finite groups. I imagine students like them too. It’s nice having a mathematical thing you can write out in full and know you aren’t missing anything.

    The second thing is that they are, or they can be, “simple groups”. That’s … a challenge to explain. This has to do with the structure of the group and the kinds of subgroup you can extract from it. It’s very very loosely and figuratively and do not try to pass this off at your thesis defense kind of like being a prime number. In fact, Cyclic Groups for a prime number of elements are simple groups. So are Alternating Groups on five or more elements.

    So we get to wondering: what are the finite simple groups? Turns out they come in four main families. One family is the Cyclic Groups for a prime number of things. One family is the Alternating Groups on five or more things. One family is this collection called the Chevalley Groups. Those are mostly things about projections: the ways to map one set of coordinates into another. We don’t talk about them much in Introduction To Not That Kind Of Algebra. They’re too deep into Geometry for people learning Algebra. The last family is this collection called the Twisted Chevalley Groups, or the Steinberg Groups. And they … uhm. Well, I never got far enough into Geometry I’m Guessing to understand what they’re for. I’m certain they’re quite useful to people working in the field of order-three automorphisms of the whatever exactly D4 is.

    And that’s it. That’s all the families there are. If it’s a finite simple group then it’s one of these. … Unless it isn’t.

    Because there are a couple of stragglers. There are a few finite simple groups that don’t fit in any of the four big families. And it really is only a few. I would have expected an infinite number of weird little cases that don’t belong to a family that looks similar. Instead, there are 26. (27 if you decide a particular one of the Steinberg Groups doesn’t really belong in that family. I’m not familiar enough with the case to have an opinion.) Funny number to have turn up. It took ten thousand pages to prove there were just the 26 special cases. I haven’t read them all. (I haven’t read any of the pages. But my Algebra professors at Rutgers were proud to mention their department’s work in tracking down all these cases.)

    Some of these cases have some resemblance to one another. But not enough to see them as a family the way the Cyclic Groups are. We bundle all these together in a wastebasket taxon called “the sporadic groups”. The first five of them were worked out in the 1860s. The last of them was worked out in 1980, seven years after its existence was first suspected.

    The sporadic groups all have weird sizes. The smallest one, known as M11 (for “Mathieu”, who found it and four of its siblings in the 1860s) has 7,920 things in it. They get enormous soon after that.

    The biggest of the sporadic groups, and the last one described, is the Monster Group. It’s known as M. It has a lot of things in it. In particular it’s got 808,017,424,794,512,875,886,459,904,961,710,757,005,754,368,000,000,000 things in it. So, you know, it’s not like we’ve written out everything that’s in it. We’ve just got descriptions of how you would write out everything in it, if you wanted to try. And you can get a good argument going about what it means for a mathematical object to “exist”, or to be “created”. There are something like 10^54 things in it. That’s something like a trillion times a trillion times the number of stars in the observable universe. Not just the stars in our galaxy, but all the stars in all the galaxies we could in principle ever see.
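    The order is known exactly, by way of the Monster’s prime factorization. Multiplying it out recovers that 54-digit number:

```python
# Prime factorization of the Monster's order:
# 2^46 * 3^20 * 5^9 * 7^6 * 11^2 * 13^3 * 17 * 19 * 23 * 29 * 31 * 41 * 47 * 59 * 71
factors = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
           17: 1, 19: 1, 23: 1, 29: 1, 31: 1, 41: 1, 47: 1, 59: 1, 71: 1}

order = 1
for prime, power in factors.items():
    order *= prime ** power

print(order)            # 808017424794512875886459904961710757005754368000000000
print(len(str(order)))  # 54 digits, a shade under 10**54
```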

    It’s one of the rare things for which “Brobdingnagian” is an understatement. Everything about it is mind-boggling, the sort of thing that staggers the imagination more than infinitely large things do. We don’t really think of infinitely large things; we just picture “something big”. A number like that one above is definite, and awesomely big. Just read off the digits of that number; it sounds like what we imagine infinity ought to be.

    We can make a chart, called the “character table”, which describes how subsets of the group interact with one another. The character table for the Monster Group is 194 rows tall and 194 columns wide. The Monster Group can be represented as this, I am solemnly assured, logical and beautiful algebraic structure. It’s something like a polyhedron in rather more than three dimensions of space. In particular it needs 196,884 dimensions to show off its particular beauty. I am taking experts’ word for it. I can’t quite imagine more than 196,883 dimensions for a thing.

    And it’s a thing full of mystery. This creature of group theory makes us think of the number 196,884. The same 196,884 turns up in number theory, the study of how integers are put together. It’s the first non-boring coefficient in a thing called the j-function. It’s not coincidence. This bit of number theory and this bit of group theory are bound together, but it took some years for anyone to quite understand why.

    There are more mysteries. The character table has 194 rows and columns. Each column implies a function. Some of those functions are duplicated; there are 171 distinct ones. But it turns out some of those distinct ones can be written by adding together multiples of others; that leaves 163 independent ones. 163 appears again in number theory, in the study of algebraic integers. These are, of course, not integers at all. They’re things that look like complex-valued numbers: some real number plus some (possibly other) real number times the square root of some specified negative number. They’ve got neat properties. Or weird ones.

    You know how with integers there’s just one way to factor them? Like, fifteen is equal to three times five and no other set of prime numbers? Algebraic integers don’t work like that. There’s usually multiple ways to do that. There are exceptions, algebraic integers that still have unique factorings. They happen only for a few square roots of negative numbers. The biggest of those negative numbers? Minus 163.
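    A standard example of the failure (my choice, not the essay’s) uses the square root of minus five: the number 6 factors in two genuinely different ways among numbers of the form a + b√−5. Keeping the numbers as exact integer pairs (a, b):

```python
def mul(x, y):
    """Multiply a + b*sqrt(-5) numbers, stored as exact integer pairs (a, b)."""
    a, b = x
    c, d = y
    return (a * c - 5 * b * d, a * d + b * c)

def norm(x):
    """The norm a^2 + 5*b^2, which multiplies nicely and never equals 2 or 3."""
    a, b = x
    return a * a + 5 * b * b

# Two different factorings of 6 among the numbers a + b*sqrt(-5):
print(mul((2, 0), (3, 0)))   # (6, 0): 6 = 2 * 3
print(mul((1, 1), (1, -1)))  # (6, 0): 6 = (1 + sqrt(-5)) * (1 - sqrt(-5))

# The norms of the four factors; splitting any further would need a factor
# of norm 2 or 3, which a^2 + 5b^2 cannot produce.
print([norm(f) for f in [(2, 0), (3, 0), (1, 1), (1, -1)]])  # [4, 9, 6, 6]
```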

    I don’t know if this 163 appearance means something. As I understand the matter, neither does anybody else.

    There is some link to the mathematics of string theory. That’s an interesting but controversial and hard-to-experiment-upon model for how the physics of the universe may work. But I don’t know string theory well enough to say what it is or how surprising this should be.

    The Monster Group creates a monster essay. I suppose it couldn’t do otherwise. I suppose I can’t adequately describe all its sublime mystery. Dr Mark Ronan has written a fine web page describing much of the Monster Group and the history of our understanding of it. He also has written a book, Symmetry and the Monster, to explain all this in greater depths. I’ve not read the book. But I do mean to, now.

    • gaurish 9:17 am on Saturday, 10 December, 2016 Permalink | Reply

      It’s a shame that I somehow missed this blog post. Have you read “Symmetry and the Monster”? Will you recommend reading it?


      • Joseph Nebus 5:57 am on Saturday, 17 December, 2016 Permalink | Reply

        Not to fear. Given how I looked away a moment and got fourteen days behind writing comments I can’t fault anyone for missing a post or two here.

        I haven’t read Symmetry and the Monster, but from Dr Ronan’s web site about the Monster Group I’m interested and mean to get to it when I find a library copy. I keep getting farther behind in my reading, admittedly. Today I realized I’d rather like to read Dan Bouk’s How Our Days Became Numbered: Risk and the Rise of the Statistical Individual, which focuses in large part on the growth of the life insurance industry in the 19th century. And even so I just got a book about the sale of timing data that was so common back when standard time was being discovered-or-invented.


  • Joseph Nebus 6:00 pm on Monday, 28 November, 2016 Permalink | Reply
    Tags: infinity, local, Niagara Falls

    The End 2016 Mathematics A To Z: Local 

    Today’s is another of those words that means nearly what you would guess. There are still seven letters left, by the way, which haven’t had any requested terms. If you’d like something described please try asking.


    Local.

    Stops at every station, rather than just the main ones.

    OK, I’ll take it seriously.

    So a couple years ago I visited Niagara Falls, and I stepped into the river, just above the really big drop.

    A view (from the United States side) of the Niagara Falls. With a lot of falling water and somehow even more mist.

    Niagara Falls, demonstrating some locally unsafe waters to be in. Background: Canada (left), United States (right).

    I didn’t have any plans to go over the falls, and didn’t, but I liked the thrill of claiming I had. I’m not crazy, though; I picked a spot I knew was safe to step in. It’s only in the retelling I went into the Niagara River just above the falls.

    Because yes, there is surely danger in certain spots of the Niagara River. But there are also spots that are perfectly safe. And not isolated spots either. I wouldn’t have been less safe if I’d stepped into the river a few feet closer to the edge. Nor if I’d stepped in a few feet farther away. Where I stepped in was locally safe.

    Speedy but not actually turbulent waters on the Niagara River, above the falls.

    The Niagara River, and some locally safe enough waters to be in. That’s not me in the picture; if you do know who it is, I have no way of challenging you. But it’s the area I stepped into and felt this lovely illicit thrill doing so.

    Over in mathematics we do a lot of work on stuff that’s true or false depending on what some parameters are. We can look at bunches of those parameters, and they often look something like normal everyday space. There’s some values that are close to what we started from. There’s others that are far from that.

    So, a “neighborhood” of some point is that point and some set of points containing it. It needs to be an “open” set, which means it doesn’t contain its boundary. So, like, everything less than one minute’s walk away, but not the stuff that’s precisely one minute’s walk away. (If we include boundaries we break stuff that we don’t want broken is why.) And certainly not the stuff more than one minute’s walk away. A neighborhood could have any shape. It’s easy to think of it as a little disc around the point you want. That’s usually the easiest to describe in a proof, because it’s “everything a distance less than (something) away”. (That “something” is either ‘δ’ or ‘ε’. Both Greek letters are called in to mean “a tiny distance”. They have different connotations about what the tiny distance is in.) It’s easiest to draw as little amoeba-like blob around a point, and contained inside a bigger amoeba-like blob.

    Anyway, something is true “locally” to a point if it’s true in that neighborhood. That means true for everything in that neighborhood. Which is what you’d expect. “Local” means just that. It’s the stuff that’s close to where we started out.
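    We can at least sketch the idea in code: sample a neighborhood and test whether a statement holds everywhere in it. (Sampling can refute “locally true” but never prove it; this is an illustration of the definition, not a proof technique.)

```python
def holds_locally(pred, point, delta, samples=1000):
    """Test a predicate at sample points strictly inside the open
    neighborhood (point - delta, point + delta)."""
    for k in range(1, samples):
        x = point - delta + (2 * delta) * k / samples
        if not pred(x):
            return False
    return True

# "x**2 < 1" is true locally to 0, with a neighborhood of radius 0.5 ...
print(holds_locally(lambda x: x * x < 1, 0.0, 0.5))  # True
# ... but not with a neighborhood wide enough to reach past 1.
print(holds_locally(lambda x: x * x < 1, 0.0, 1.5))  # False
```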

    Often we would like to know something “globally”, which means … er … everywhere. Universally so. But it’s usually easier to prove a thing locally. I suppose having a point where we know something is so makes it easier to prove things about what’s nearby. Distant stuff, who knows?

    “Local” serves as an adjective for many things. We think of a “local maximum”, for example, or “local minimum”. This is where whatever we’re studying has a value bigger (or smaller) than anywhere else nearby has. Or we speak of a function being “locally continuous”, meaning that we know it’s continuous near this point and we make no promises away from it. It might be “locally differentiable”, meaning we can take derivatives of it close to some interesting point. We say nothing about what happens far from it.
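    Finding local maxima of a sampled function is a short sketch: look for points bigger than both neighbors. A local maximum needn’t be the global one, which is rather the point.

```python
def local_maxima(values):
    """Indices where a sampled function is bigger than both immediate neighbors."""
    return [i for i in range(1, len(values) - 1)
            if values[i] > values[i - 1] and values[i] > values[i + 1]]

# A bumpy sampled function: index 1 is a local maximum only,
# index 4 happens to be the global maximum as well.
ys = [0, 3, 1, 2, 5, 4, 0]
print(local_maxima(ys))  # [1, 4]
```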

    Unless we do. We can talk about something being “local to infinity”. Your first reaction to that should probably be to slap the table and declare that’s it, we’re done. But we can make it sensible, at least to other mathematicians. We do it by starting with a neighborhood that contains the origin, zero, that point in the middle of everything. So, what’s the complement of that? It’s everything that’s far enough away from the origin. (Don’t include the boundary, we don’t need those headaches.) So why not call that the “neighborhood of infinity”? Other than that it’s a weird set of words to put together? And if something is true in that “neighborhood of infinity”, what is that thing other than true “local to infinity”?

    I don’t blame you for being skeptical.

  • Joseph Nebus 6:00 pm on Monday, 7 November, 2016 Permalink | Reply
    Tags: , , , , , infinity, ,   

    The End 2016 Mathematics A To Z: Cantor’s Middle Third 

    Today’s term is a request, the first of this series. It comes from HowardAt58, head of the Saving School Math blog. There are many letters not yet claimed; if you have a term you’d like to see me write about please head over to the “Any Requests?” page and pick a letter. Please not one I figure to get to in the next day or two.

    Cantor’s Middle Third.

    I think one could make a defensible history of mathematics by describing it as a series of ridiculous things that get discovered. And then, by thinking about these ridiculous things long enough, mathematicians come to accept them. Even rely on them. Sometime later the public even comes to accept them. I don’t mean to say getting people to accept ridiculous things is the point of mathematics. But there is a pattern which happens.

    Consider. People doing mathematics came to see how a number could be detached from a count or a measure of things. That we can do work on, say, “three” whether it’s three people, three kilograms, or three square meters. We’re so used to this it’s only when we try teaching mathematics to the young we realize it isn’t obvious.

    Or consider that we can have, rather than a whole number of things, a fraction. Some part of a thing, as if you could have one-half pieces of chalk or two-thirds a fruit. Counting is relatively obvious; fractions are something novel but important.

    We have “zero”; somehow, the lack of something is still a number, the way two or five or one-half might be. For that matter, “one” is a number. How can something that isn’t numerous be a number? We’re used to it anyway. We can have not just fractions and one and zero but irrational numbers, ones that can’t be represented as a fraction. We have negative numbers, somehow a lack of whatever we were counting so great that we might add some of what we were counting to the pile and still have nothing.

    That takes us up to about eight hundred years ago or something like that. The public’s gotten to accept all this as recently as maybe three hundred years ago. They’ve still got doubts. I don’t blame folks. Complex numbers mathematicians like; the public’s still getting used to the idea, but at least they’ve heard of them.

    Cantor’s Middle Third is part of the current edge. It’s something mathematicians are aware of and that, at first at least, defies sense. But we’ve come to accept it. The public, well, they don’t know about it. Maybe some do; it turns up in pop mathematics books that like sharing the strangeness of infinities. Few people read them. Sometimes it feels like all those who do go online to tell mathematicians they’re crazy. It comes to us, as you might guess from the name, from Georg Cantor. Cantor established the modern mathematical concept of how to study infinitely large sets in the late 19th century. And he was repeatedly hospitalized for depression. It’s cruel to write all that off as “and he was crazy”. His work’s withstood a hundred and thirty-five years of extremely smart people looking at it skeptically.

    The Middle Third starts out easily enough. Take a line segment. Then chop it into three equal pieces and throw away the middle third. You see where the name comes from. What do you have left? Some of the original line. Two-thirds of the original line length. A big gap in the middle.

    Now take the two line segments. Chop each of them into three equal pieces. Throw away the middle thirds of the two pieces. Now we’re left with four chunks of line and four-ninths of the original length. One big and two little gaps in the middle.

    Now take the four little line segments. Chop each of them into three equal pieces. Throw away the middle thirds of the four pieces. We’re left with eight chunks of line, eight-twenty-sevenths of the original length. Lots of little gaps. Keep doing this, chopping up line segments and throwing away middle pieces. Never stop. Well, pretend you never stop and imagine what’s left.
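    The chopping is easy to mimic in code. This sketch tracks the segments left after each round and their total length: 1, then 2/3, then 4/9, then 8/27, and so on:

```python
from fractions import Fraction

def cantor_step(segments):
    """Chop each segment into three equal pieces and throw away the middle third."""
    out = []
    for lo, hi in segments:
        third = (hi - lo) / 3
        out.append((lo, lo + third))
        out.append((hi - third, hi))
    return out

segments = [(Fraction(0), Fraction(1))]
for n in range(4):
    total = sum(hi - lo for lo, hi in segments)
    print(len(segments), "segments, total length", total)
    segments = cantor_step(segments)
```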

    What’s left is deeply weird. What’s left has no length, no measure. That’s easy enough to prove. But we haven’t thrown everything away. There are bits of the original line segment left over. The left endpoint of the original line is left behind. So is the right endpoint of the original line. The endpoints of the line segments after the first time we chopped out a third? Those are left behind. The endpoints of the line segments after chopping out a third the second time, the third time? Those have to be in the set. We have a dust, isolated little spots of the original line, none of them combining together to cover any length. And there are infinitely many of these isolated dots.

    We’ve seen that before. At least we have if we’ve read anything about the Cantor Diagonal Argument. You can find that among the first ten posts of every mathematics blog. (Not this one. I was saving the subject until I had something good to say about it. Then I realized many bloggers have covered it better than I could.) Part of it is pondering how there can be a set of infinitely many things that don’t cover any length. The whole numbers are such a set and it seems reasonable they don’t cover any length. The rational numbers, though, are also an infinitely-large set that doesn’t cover any length. And there’s exactly as many rational numbers as there are whole numbers. This is unsettling but if you’re the sort of person who reads about infinities you come to accept it. Or you get into arguments with mathematicians online and never know you’ve lost.

    Here’s where things get weird. How many bits of dust are there in this middle third set? It seems like it should be countable, the same size as the whole numbers. After all, we pick up some of these points every time we throw away a middle third. So we double the number of points left behind every time we throw away a middle third. That’s countable, right?

    It’s not. We can prove it. The proof looks uncannily like that of the Cantor Diagonal Argument. That’s the one that proves there are more real numbers than there are whole numbers. There are points in this leftover set that were not endpoints of any of these middle-third excerpts. This dust has more points in it than there are rational numbers, but it covers no length.
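    Here’s a way to exhibit one such point. The sketch below tracks a number through repeated deletions by rescaling whichever surviving third it lands in back onto the unit interval. The fraction 1/4 cycles forever between 1/4 and 3/4, so it’s never thrown away, yet it’s no endpoint: the endpoints all have a power of three in the denominator.

```python
from fractions import Fraction

def survives_cantor(x, steps=60):
    """Follow x through middle-third deletions, rescaling the surviving
    third back to [0, 1] each round. (Sixty rounds stands in for 'forever';
    for a cycling point like 1/4 that is enough to see the pattern.)"""
    for _ in range(steps):
        if x <= Fraction(1, 3):
            x = 3 * x
        elif x >= Fraction(2, 3):
            x = 3 * x - 2
        else:
            return False  # landed in a middle third: thrown away
    return True

print(survives_cantor(Fraction(1, 4)))  # True: 1/4 -> 3/4 -> 1/4 -> ...
print(survives_cantor(Fraction(1, 2)))  # False: in the first discarded third
```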

    (I don’t know if the dust has the same size as the real numbers. I suspect it’s unproved whether it has or hasn’t, because otherwise I’d surely be able to find the answer easily.)

    It’s got other neat properties. It’s a fractal, which is why someone might have heard of it, back in the Great Fractal Land Rush of the 80s and 90s. Look closely at part of this set and it looks like the original set, with bits of dust edging gaps of bigger and smaller sizes. It’s got a fractal dimension, or “Hausdorff dimension” in the lingo, that’s the logarithm of two divided by the logarithm of three. That’s a number actually known to be transcendental, which is reassuring. Nearly all numbers are transcendental, but we only know a few examples of them.
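    That dimension can be checked numerically: after n rounds the set is covered by 2^n pieces each of length 3^−n, so the box-counting ratio log(pieces)/log(1/size) comes out to log(2)/log(3) at every stage:

```python
import math

# After n rounds: 2**n covering pieces, each of size 3**-n.
for n in (5, 10, 20):
    pieces, size = 2 ** n, 3.0 ** -n
    print(n, math.log(pieces) / math.log(1 / size))  # about 0.631 each time

# The limit, the Hausdorff dimension of the Cantor set:
print(math.log(2) / math.log(3))  # about 0.6309
```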

    HowardAt58 asked me about the Middle Third set, and that’s how I’ve referred to it here. It’s more often called the “Cantor set” or “Cantor comb”. The “comb” makes sense because if you draw successive middle-thirds-thrown-away, one after the other, you get something that looks kind of like a hair comb, if you squint.

    You can build sets like this that aren’t based around thirds. You can, for example, develop one by cutting lines into five chunks and throwing away the second and fourth. You get results that are similar, and similarly heady, but different. They’re all astounding. They’re all hard to believe in yet. They may get to be stuff we just accept as part of how mathematics works.

  • Joseph Nebus 6:00 pm on Sunday, 23 October, 2016 Permalink | Reply
    Tags: cosines, infinity, math anxiety

    Reading the Comics, October 19, 2016: An Extra Day Edition 

    I didn’t make noise about it, but last Sunday’s mathematics comic strip roundup was short one day. I was away from home and normal computer stuff Saturday. So I posted without that day’s strips under review. There was just the one, anyway.

    Also I want to remind folks I’m doing another Mathematics A To Z, and taking requests for words to explain. There are many appealing letters still unclaimed, including ‘A’, ‘T’, and ‘O’. Please put requests in over on that page, because it’s easier for me to keep track of what’s been claimed that way.

    Matt Janz’s Out of the Gene Pool rerun for the 15th missed last week’s cut. It does mention the Law of Cosines, which is what the Pythagorean Theorem looks like if you don’t have a right triangle. You still have to have a triangle. Bobby-Sue recites the formula correctly, if you know the notation. The formula’s c^2 = a^2 + b^2 − 2ab·cos(C). Here ‘a’ and ‘b’ and ‘c’ are the lengths of legs of the triangle. ‘C’, the capital letter, is the size of the angle opposite the leg with length ‘c’. That’s a common notation. ‘A’ would be the size of the angle opposite the leg with length ‘a’. ‘B’ is the size of the angle opposite the leg with length ‘b’. The Law of Cosines is a generalization of the Pythagorean Theorem. It’s a result that tells us something like the original theorem but for cases the original theorem can’t cover. And if it happens to be a right triangle the Law of Cosines gives us back the original Pythagorean Theorem. In a right triangle C is the size of a right angle, and the cosine of that is 0.
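    A sketch of the formula in code, checking that a right angle gives back the Pythagorean Theorem:

```python
import math

def law_of_cosines(a, b, angle_c):
    """Length of the side opposite angle C (in radians):
    c^2 = a^2 + b^2 - 2*a*b*cos(C)."""
    return math.sqrt(a * a + b * b - 2 * a * b * math.cos(angle_c))

# With C a right angle the cosine term drops out: the 3-4-5 right triangle.
print(round(law_of_cosines(3, 4, math.pi / 2), 9))  # 5.0
# An equilateral triangle: two sides of 1 and a 60-degree angle between them.
print(round(law_of_cosines(1, 1, math.radians(60)), 9))  # 1.0
```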

    That said Bobby-Sue is being fussy about the drawings. No geometrical drawing is ever perfectly right. The universe isn’t precise enough to let us draw a right triangle. Come to it we can’t even draw a triangle, not really. We’re meant to use these drawings to help us imagine the true, Platonic ideal, figure. We don’t always get there. Mock proofs, the kind of geometric puzzle showing something we know to be nonsense, rely on that. Give chalkboard art a break.

    Samson’s Dark Side of the Horse for the 17th is the return of Horace-counting-sheep jokes. So we get a π joke. I’m amused, although I couldn’t sleep trying to remember digits of π out quite that far. I do better working out Collatz sequences.
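
    If you haven’t met Collatz sequences: take a number; halve it if it’s even, triple it and add one if it’s odd; repeat until you hit 1. A minimal Python sketch:

```python
def collatz(n):
    """Return the Collatz sequence starting from n, stopping at 1."""
    seq = [n]
    while n != 1:
        n = 3 * n + 1 if n % 2 else n // 2
        seq.append(n)
    return seq

print(collatz(6))  # [6, 3, 10, 5, 16, 8, 4, 2, 1]
```

    Whether every starting number eventually reaches 1 is, famously, still an open problem.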

    Hilary Price’s Rhymes With Orange for the 19th at least shows the attempt to relieve mathematics anxiety. I’m sympathetic. It does seem like there should be ways to relieve this (or any other) anxiety, but finding which ones work, and which ones work best, is partly a mathematical problem. As often happens with Price’s comics I’m particularly tickled by the gag in the title panel.

    The Help Session ('Be sure to show your work'). 'It's simple --- if 3 deep breaths take 4.2 seconds, and your dread to confidence ratio is 2:1, how long will it take to alleviate your math anxiety?'

    Hilary Price’s Rhymes With Orange for the 19th of October, 2016. I don’t think there’s enough data given to solve the problem. But it’s a start at least. Start by making a note of it on your suspiciously large sheet of paper.

    Norm Feuti’s Gil rerun for the 19th builds on the idea calculators are inherently cheating on arithmetic homework. I’m sympathetic to both sides here. If Gil just wants to know that his answers are right there’s not much reason not to use a calculator. But if Gil wants to know that he followed the right process then the calculator’s useless. By the right process I mean, well, the work to be done. Did he start out trying to calculate the right thing? Did he pick an appropriate process? Did he carry out all the steps in that process correctly? If he made mistakes on any of those he probably didn’t get to the right answer, but it’s not impossible that he would. Sometimes multiple errors conspire and cancel one another out. That may not hurt you with any one answer, but it does mean you aren’t doing the problem right and a future problem might not be so lucky.

    Zach Weinersmith’s Saturday Morning Breakfast Cereal rerun for the 19th has God crashing a mathematics course to proclaim there’s a largest number. We can suppose there is such a thing. That’s how arithmetic modulo a number is done, for one. It can produce weird results in which stuff we naturally rely on doesn’t work anymore. For example, in ordinary arithmetic we know that if one number times another equals zero, then either the first number or the second, or both, were zero. We use this in solving polynomials all the time. But in arithmetic modulo 8 (say), 4 times 2 is equal to 0.
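
    You can watch that happen in a few lines of Python, searching modulo 8 for pairs of nonzero numbers that multiply to zero:

```python
# In arithmetic modulo 8, nonzero numbers can multiply to zero.
modulus = 8
zero_products = [(a, b)
                 for a in range(1, modulus)
                 for b in range(1, modulus)
                 if (a * b) % modulus == 0]
print(zero_products)  # [(2, 4), (4, 2), (4, 4), (4, 6), (6, 4)]
```

    Notice every number in those pairs is even; since 8 is a power of 2, it takes factors of 2 on both sides to build a product of zero.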

    And if we recklessly talk about “infinity” as a number then we get outright crazy results, some of them teased in Weinersmith’s comic. “Infinity plus one”, for example, is “infinity”. So is “infinity minus one”. Depending on how you set it up, “infinity minus infinity” is “infinity”, or maybe zero, or really any number you want. We can avoid these logical disasters — so far, anyway — by being careful. We have to understand that “infinity” is not a number, though we can use numbers growing infinitely large.

    Induction, meanwhile, is a great, powerful, yet baffling form of proof. When it solves a problem it solves it beautifully. And easily, too, usually by checking a base case or two and showing how each case leads to the next. But picking the cases, and setting them up so that the proof is valid, is not easy. There are logical pitfalls and it is so hard to learn how to avoid them.

    Jon Rosenberg’s Scenes from a Multiverse for the 19th plays on a wonderful paradox of randomness. Randomness is … well, unpredictable. If I tried to sell you a sequence of random numbers and they were ‘1, 2, 3, 4, 5, 6, 7’ you’d be suspicious at least. And yet, perfect randomness will sometimes produce patterns. If there were no little patches of order we’d have reason to suspect the randomness was faked. There is no reason that a message like “this monkey evolved naturally” couldn’t be encoded into a genome by chance. It may just be so unlikely we don’t buy it. The longer the patch of order the less likely it is. And yet, incredibly unlikely things do happen. The study of impossibly unlikely events is a good way to quickly break your brain, in case you need one.

  • Joseph Nebus 6:00 pm on Sunday, 25 September, 2016 Permalink | Reply
    Tags: , , , infinity,   

    Reading the Comics, September 24, 2016: Infinities Happen Edition 

    I admit it’s a weak theme. But two of the comics this week give me reason to talk about infinitely large things and how the fact of being infinitely large affects the probability of something happening. That’s enough for a mid-September week of comics.

    Kieran Meehan’s Pros and Cons for the 18th of September is a lottery problem. There’s a fun bit of mathematical philosophy behind it. Supposing that a lottery runs long enough without changing its rules, and that it does draw its numbers randomly, it does seem to follow that any valid set of numbers will come up eventually. At least, the probability is 1 that the pre-selected set of numbers will come up if the lottery runs long enough. But that doesn’t mean it’s assured. There’s not any law, physical or logical, compelling every set of numbers to come up. But that is exactly akin to tossing a coin fairly infinitely many times and having it come up tails every single time. There’s no reason that can’t happen, but the probability of it happening is zero.

    'It's true, Dr Peel. I'm a bit of a psychic.' 'Would you share the winning lottery numbers with me?' '1, 10, 17, 39, 43, and 47'. 'Those are the winning lottery numbers?' 'Yes!' 'For this Tuesday?' 'Ah! That's where it gets a bit fuzzy.'

    Kieran Meehan’s Pros and Cons for the 18th of September, 2016. I can’t say whether any of these are supposed to be the PowerBall number. (The comic strip’s title is a revision of its original, which more precisely described its gimmick but was harder to remember: A Lawyer, A Doctor, and a Cop.)

    Leigh Rubin’s Rubes for the 19th name-drops chaos theory. It’s wordplay, as of course it is, since the mathematical chaos isn’t the confusion-and-panicky-disorder of the colloquial term. Mathematical chaos is about the bizarre idea that a system can follow exactly perfectly known rules, and yet still be impossible to predict. Henri Poincaré brought this disturbing possibility to mathematicians’ attention in the 1890s, in studying the question of whether the solar system is stable. But it lay mostly fallow until the 1960s when computers made it easy to work this out numerically and really see chaos unfold. The mathematician type in the drawing evokes Einstein without being too close to him, to my eye.

    Allison Barrows’s PreTeena rerun of the 20th shows some motivated calculations. It’s always fun to see people getting excited over what a little multiplication can do. Multiplying a little change by a lot of chances is one of the ways to understanding integral calculus, and there’s much that’s thrilling in that. But cutting four hours a night of sleep is not a little thing and I wouldn’t advise it for anyone.

    Jason Poland’s Robbie and Bobby for the 20th riffs on Jorge Luis Borges’s Library of Babel. It’s a great image, the idea of the library containing every book possible. And it’s good mathematics also; it’s a good way to probe one’s understanding of infinity and of probability. Probably logic, also. After all, grant that the index to the Library of Babel is a book, and therefore in the library somehow. How do you know you’ve found the index that hasn’t got any errors in it?

    Ernie Bushmiller’s Nancy Classics for the 21st originally ran the 21st of September, 1949. It’s another example of arithmetic as a proof of intelligence. Routine example, although it’s crafted with the usual Bushmiller precision. Even the close-up, peering-into-your-soul image of Professor Stroodle in the second panel serves the joke; without it the stress on his wrinkled brow would be diffused. I can’t fault anyone not caring for the joke; it’s not much of one. But wow is the comic strip optimized to deliver it.

    Thom Bluemel’s Birdbrains for the 23rd is also a mathematics-as-proof-of-intelligence strip, although this one name-drops calculus. It’s also a strip that probably would have played better had it come out before Blackfish got people asking unhappy questions about Sea World and other aquariums keeping large, deep-ocean animals. I would’ve thought Comic Strip Master Command to have sent an advisory out on the topic.

    Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 23rd is, among other things, a guide for explaining the difference between speed and velocity. Speed’s a simple number, a scalar in the parlance. Velocity is (most often) a two- or three-dimensional vector, a speed in some particular direction. This has implications for understanding how things move, such as pedestrians.
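
    The distinction fits in a couple lines of Python: a velocity is a vector, and the speed is just its size:

```python
import math

# Velocity is a vector; speed is its magnitude, a plain number.
velocity = (3.0, 4.0)          # say, metres per second east and north
speed = math.hypot(*velocity)
print(speed)  # 5.0

# Two pedestrians can share a speed while having different velocities.
other_velocity = (-3.0, 4.0)   # same speed, heading west and north
print(math.hypot(*other_velocity) == speed)  # True
```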

  • Joseph Nebus 6:00 pm on Wednesday, 21 September, 2016 Permalink | Reply
    Tags: , infinity, L'Hopital's Rule, ,   

    L’Hopital’s Rule Without End: Is That A Thing? 

    I was helping a friend learn L’Hôpital’s Rule. This is a Freshman Calculus thing. (A different one from last week, it happens. Folks are going back to school, I suppose.) The friend asked me a point I thought shouldn’t come up. I’m certain it won’t come up in the exam my friend was worried about, but I couldn’t swear it wouldn’t happen at all. So this is mostly a note to myself to think it over and figure out whether the trouble could come up. And also so this won’t be my most accessible post; I’m sorry for that, for folks who aren’t calculus-familiar.

    L’Hôpital’s Rule is a way of evaluating the limit of one function divided by another, of f(x) divided by g(x). If the limit of \frac{f(x)}{g(x)} has either the form of \frac{0}{0} or \frac{\infty}{\infty} then you’re not stuck. You can take the first derivative of the numerator and the denominator separately. The limit of \frac{f'(x)}{g'(x)} if it exists will be the same value.

    But it’s possible to have to do this several times over. I used the example of finding the limit, as x grows infinitely large, where f(x) = x^2 and g(x) = e^x. \frac{x^2}{e^x} goes to \frac{\infty}{\infty} as x grows infinitely large. The first derivatives, \frac{2x}{e^x} , also go to \frac{\infty}{\infty} . You have to repeat the process again, taking the first derivatives of numerator and denominator again. \frac{2}{e^x} finally goes to 0 as x gets infinitely large. You might have to do this a bunch of times. If f(x) were x^7 and g(x) again e^x you’d properly need to do this seven times over. With experience you figure out you can skip some steps. Of course students don’t have the experience to know they can skip ahead to the punch line there, but that’s what the practice in homework is for.
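
    That limit is something you prove with the rule, not with a computer, but a quick numerical peek in Python shows the ratio collapsing as x grows:

```python
import math

# x^2 / e^x for growing x: the exponential wins in the end.
for x in [1, 5, 10, 20, 40]:
    print(x, x ** 2 / math.exp(x))
# The ratio rises briefly, then heads toward zero as x grows.
```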

    Anyway, my friend asked whether it’s possible to get a pattern that always ends up with \frac{0}{0} or \frac{\infty}{\infty} and never breaks out of this. And that’s what’s got me stuck. I can think of a few patterns that would. Start out, for example, with f(x) = e^{3x} and g(x) = e^{2x}. Properly speaking, that would never end. You’d get an infinity-over-infinity pattern every derivative you took. Similarly, if you started with f(x) = \frac{1}{x} and g(x) = e^{-x} you’d never come to an end. As x got infinitely large both f(x) and g(x) would go to zero and all their derivatives would be zero over and over and over and over again.

    But those are special cases. Anyone looking at what they were doing instead of just calculating would look at, say, \frac{e^{3x}}{e^{2x}} and realize that’s the same as e^x which falls out of the L’Hôpital’s Rule formulas. Or \frac{\frac{1}{x}}{e^{-x}} would be the same as \frac{e^x}{x} which is an infinity-over-infinity form. But it takes only one derivative to break out of the infinity-over-infinity pattern.

    So I can construct examples that never break out of a zero-over-zero or an infinity-over-infinity pattern if you calculate without thinking. And calculating without thinking is a common problem students have. Arguably it’s the biggest problem mathematics students have. But what I wonder is, are there ratios that end up in an endless zero-over-zero or infinity-over-infinity pattern even if you do think it out?

    And thus this note; I’d like to nag myself into thinking about that.

  • Joseph Nebus 6:00 pm on Sunday, 10 July, 2016 Permalink | Reply
    Tags: , , , , infinity, , , scams   

    Reading the Comics, July 6, 2016: Another Busy Week Edition 

    It’s supposed to be the summer vacation. I don’t know why Comic Strip Master Command is so eager to send me stuff. Maybe my standards are too loose. This doesn’t even cover all of last week’s mathematically-themed comics. I’ll need another that I’ve got set for Tuesday. I don’t mind.

    Corey Pandolph and Phil Frank and Joe Troise’s The Elderberries rerun for the 3rd features one of my favorite examples of applied probability. The game show Deal or No Deal offered contestants the prize within a suitcase they picked, or a dealer’s offer. The offer would vary up or down as non-selected suitcases were picked, giving the chance for people to second-guess themselves. It also makes a good redemption game. The banker’s offer would typically be less than the expectation value, what you’d get on average from all the available suitcases. But now and then the dealer offered more than the expectation value and I got all ready to yell at the contestants.
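
    For anyone who’d like to play banker at home, here’s a Python sketch of the expectation-value comparison. The prize amounts are made up for illustration; the show’s actual board isn’t reproduced here.

```python
# The expectation value of the remaining suitcases: the average prize,
# each case weighted equally since any could be yours.
remaining = [0.01, 100, 5_000, 250_000, 1_000_000]  # hypothetical prizes
expectation = sum(remaining) / len(remaining)
print(expectation)  # about 251,020

# A rational contestant compares the banker's offer to that average.
offer = 180_000
print("Take the deal" if offer > expectation else "No deal")  # No deal
```

    Real contestants, of course, aren’t risk-neutral; a sure 180,000 dollars can reasonably beat a gamble averaging 251,000.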

    This particular strip focuses on a smaller question: can you pick which of the many suitcases held the grand prize? And with the right setup, yes, you can pick it reliably.

    Mac King and Bill King’s Magic in a Minute for the 3rd uses a bit of arithmetic to support a mind-reading magic trick. The instructions say to start with a number from 1 to 10 and do various bits of arithmetic which lead inevitably to 4. You can prove that for an arbitrary number, or you can just try it for all ten numbers. That’s tedious but not hard and it’ll prove the inevitability of 4 here. There aren’t many countries with names that start with ‘D’; Denmark’s surely the one any American (or European) reader is likeliest to name. But Dominica, the Dominican Republic, and Djibouti would also be answers. (List Of Countries Of The World.com also lists Dhekelia, which I never heard of either.) Anyway, with Denmark forced, ‘E’ almost begs for ‘elephant’. I suppose ’emu’ would do too, or ‘echidna’. And ‘elephant’ almost forces ‘grey’ for a color, although ‘white’ would be plausible too. A magician has to know how things like this work.
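
    The strip’s exact steps aren’t reproduced here, but a common version of this kind of trick multiplies by 9, takes the digit sum, and subtracts 5. Here’s the tedious-but-easy check, in Python, that every starting number from 1 to 10 gets forced to 4:

```python
def trick(n):
    """A common arithmetic route to a forced 4 (an assumption; the strip's
    own steps may differ): multiply by 9, sum the digits down to a single
    digit, subtract 5. Any multiple of 9 has digit sum 9, so 9 - 5 = 4."""
    m = 9 * n
    while m >= 10:
        m = sum(int(d) for d in str(m))
    return m - 5

print([trick(n) for n in range(1, 11)])  # [4, 4, 4, 4, 4, 4, 4, 4, 4, 4]
```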

    Werner Wejp-Olsen’s feature Inspector Danger’s Crime Quiz for the 4th features a mathematician as victim of the day’s puzzle murder. I admit I’m skeptical of deathbed identifications of murderers like this, but it would spoil a lot of puzzle mysteries if we disallowed them. (Does anyone know how often a deathbed identification actually happens?) I can’t make the alleged answer make any sense to me. Danger of the trade in murder puzzles.

    Kris Straub’s Starship for the 4th uses mathematics as a stand-in for anything that’s hard to study and solve. I’m amused.

    Edison Lee tells his grandfather how if the universe is infinitely large, then it's possible there's an exact duplicate to him. This makes his grandfather's head explode. Edison's father says 'I see you've been pondering incomprehensibles again'. Headless grandfather says 'I need a Twinkie.'

    John Hambrock’s The Brilliant Mind of Edison Lee for the 6th of July, 2016. I’m a little surprised the last panel wasn’t set on a duplicate Earth where things turned out a little differently.

    John Hambrock’s The Brilliant Mind of Edison Lee for the 6th is about the existentialist dread mathematics can inspire. Suppose there is a chance, within any given volume of space, of Earth being made. Well, it happened at least once, didn’t it? If the universe is vast enough, it seems hard to argue that there wouldn’t be two or three or, really, infinitely many versions of Earth. It’s a chilling thought. But it requires some big suppositions, most importantly that the universe actually is infinite. The observable universe, the one we can ever get a signal from, certainly isn’t. The entire universe including the stuff we can never get to? I don’t know that that’s infinite. I wouldn’t be surprised if it’s impossible to say, for good reason. Anyway, I’m not worried about it.

    Jim Meddick’s Monty for the 6th is part of a storyline in which Monty is worshipped by tiny aliens who resemble him. They’re a bit nerdy, and calculate before they understand the relevant units. It’s a common mistake. Understand the problem before you start calculating.

  • Joseph Nebus 3:00 pm on Tuesday, 28 June, 2016 Permalink | Reply
    Tags: , , , infinity, ,   

    Reading the Comics, June 25, 2016: What The Heck, Why Not Edition 

    I had figured to do Reading the Comics posts weekly, and then last week went and gave me too big a flood of things to do. I have no idea what the rest of this week is going to look like. But given that I had four strips dated before last Sunday I’m going to err on the side of posting too much about comic strips.

    Scott Metzger’s The Bent Pinky for the 24th uses mathematics as something that dogs can be adorable about not understanding. Thus all the heads tilted, as if it were me in a photograph. The graph here is from economics, which has long had a challenging relationship with mathematics. This particular graph is qualitative; it doesn’t exactly match anything in the real world. But it helps one visualize how we might expect changes in the price of something to affect its sales. A graph doesn’t need to be precise to be instructional.

    Dave Whamond’s Reality Check for the 24th is this essay’s anthropomorphic-numerals joke. And it’s a reminder that something can be quite true without being reassuring. It plays on the difference between “real” numbers and things that really exist. It’s hard to think of a way that a number such as two could “really” exist that doesn’t also allow the square root of -1 to “really” exist.

    And to be a bit curmudgeonly, it’s a bit sloppy to speak of “the square root of negative one”, even though everyone does. It’s all right to expand the idea of square roots to cover stuff it didn’t before. But there’s at least two numbers that would, squared, equal -1. We usually call them i and -i. Square roots naturally have this problem. Both +2 and -2 squared give us 4. We pick out “the” square root by selecting the positive one of the two. But neither i nor -i is “positive”. (Don’t let the – sign fool you. It doesn’t count.) You can’t say either i or -i is greater than zero. It’s not possible to define a “greater than” or “less than” for complex-valued numbers. And that’s even before we get into quaternions, in which we summon two more “square roots” of -1 into existence. Octonions can be even stranger. I don’t blame 1 for being worried.
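
    Python’s built-in complex numbers make the point concrete: both i and -i square to -1, and the language agrees there’s no ordering them against zero:

```python
# Python writes the imaginary unit as 1j.
i = 1j
print(i * i)        # (-1+0j)
print((-i) * (-i))  # (-1+0j): two distinct numbers square to -1

# And there is no "greater than" for complex numbers:
try:
    i > 0
except TypeError as err:
    print("no order on the complex numbers:", err)
```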

    Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 24th is a pleasant bit of pop-mathematics debunking. I’ve explained in the past how I’m a doubter of the golden ratio. The Fibonacci Sequence has a bit more legitimate interest to it. That’s a sequence of numbers in which each term is the sum of the previous two terms. The famous one is 1, 1, 2, 3, 5, 8, 13, 21, et cetera. It may not surprise you to know that the Fibonacci Sequence has a link to the golden ratio. As it goes on, the ratio between one term and the next one gets close to the golden ratio.
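
    A few lines of Python show the ratios closing in on the golden ratio:

```python
# Ratios of consecutive Fibonacci numbers close in on the golden ratio.
phi = (1 + 5 ** 0.5) / 2   # about 1.6180339887

a, b = 1, 1
for _ in range(25):
    a, b = b, a + b        # step along the Fibonacci Sequence

print(b, "/", a, "=", b / a)        # ratio of consecutive terms
print(abs(b / a - phi) < 1e-8)      # True: already very close to phi
```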

    The Harmonic Series is much more deeply weird. A series is the number we get from adding together everything in a sequence. The Harmonic Series grows out of the first sequence you’d imagine ever adding up. It’s 1 plus 1/2 plus 1/3 plus 1/4 plus 1/5 plus 1/6 plus … et cetera. The first time you hear of this you get the surprise: this sum doesn’t ever stop piling up. We say it ‘diverges’. It won’t on your computer; the floating-point arithmetic it does won’t let you add enormous numbers like ‘1’ to tiny numbers like ‘1/531,325,263,953,066,893,142,231,356,120’ and get the right answer. But if you actually added this all up, it would.

    The proof gets a little messy. But it amounts to this: 1/2 plus 1/3 plus 1/4? That’s more than 1. 1/5 + 1/6 + 1/7 + 1/8 + 1/9 + 1/10 + 1/11 + 1/12? That’s also more than 1. 1/13 + 1/14 + 1/15 + et cetera up through + 1/32 + 1/33 + 1/34 is also more than 1. You need to pile up more and more terms each time, but a finite string of these numbers will add up to more than 1. So the whole series has to be more than 1 + 1 + 1 + 1 + 1 … and so more than any finite number.
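
    Exact rational arithmetic confirms each of those blocks tops 1:

```python
from fractions import Fraction

# The grouping argument above: each block of terms adds up to more than 1.
block1 = sum(Fraction(1, n) for n in range(2, 5))    # 1/2 + 1/3 + 1/4
block2 = sum(Fraction(1, n) for n in range(5, 13))   # 1/5 + ... + 1/12
block3 = sum(Fraction(1, n) for n in range(13, 35))  # 1/13 + ... + 1/34
print(block1, block1 > 1)  # 13/12, True
print(block2 > 1, block3 > 1)  # True True
```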

    That’s all amazing enough. And then the series goes on to defy all kinds of intuition. Obviously dropping a couple of terms from the series won’t change whether it converges or diverges. Multiplying alternating terms by -1, so you have (say) 1 – 1/2 + 1/3 – 1/4 + 1/5 et cetera produces something that converges, though only conditionally. It equals the natural logarithm of 2. But if you take those terms and rearrange them, you can produce any real number, positive or negative, that you want.
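
    A quick numerical check of the alternating version, in Python:

```python
import math

# Partial sums of 1 - 1/2 + 1/3 - 1/4 + ... drift toward ln 2.
total = sum((-1) ** (n + 1) / n for n in range(1, 100_001))
print(total)        # about 0.69314...
print(math.log(2))  # 0.6931471805599453
```

    The rearrangement trick, by contrast, isn’t something a single partial sum can show; it’s a statement about every possible reordering.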

    And, as Weinersmith describes here, if you just skip the correct set of terms, you can make the sum converge. The ones with 9 in the denominator will be, then, 1/9, 1/19, 1/29, 1/90, 1/91, 1/92, 1/290, 1/999, those sorts of things. Amazing? Yes. Absurd? I suppose so. This is why mathematicians learn to be very careful when they do anything, even addition, infinitely many times.
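
    You can watch the thinning-out happen in Python. The partial sums here don’t prove convergence, but they show how far the no-9 series lags the full one:

```python
# Dropping every term whose denominator contains a 9 tames the series.
# (The no-9 series converges, to roughly 22.92; the full one diverges.)
N = 100_000
full = sum(1 / n for n in range(1, N + 1))
no_nines = sum(1 / n for n in range(1, N + 1) if '9' not in str(n))
print(full)      # about 12.09 and still climbing without bound
print(no_nines)  # noticeably smaller, and creeping toward its limit
```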

    John Deering’s Strange Brew for the 25th is a fear-of-mathematics joke. The sign the warrior’s carrying is legitimate algebra, at least so far as it goes. The right-hand side of the equation gets cut off. In time, it would get to the conclusion that x equals –19/2, or -9.5.

  • Joseph Nebus 3:00 pm on Sunday, 5 June, 2016 Permalink | Reply
    Tags: , infinity, ,   

    Reading the Comics, June 3, 2016: Word Problems Without Pictures Edition 

    I haven’t got Sunday’s comics under review yet. But the past seven days were slow ones for mathematically-themed comics. Maybe Comic Strip Master Command is under the impression that it’s the (United States) summer break already. It’s not, although Funky Winkerbean did a goofy sequence graduating its non-player-character students. And Zits has been doing a summer reading storyline that only makes sense if Jeremy Duncan is well into summer. Maybe Comic Strip Master Command thinks it’s a month later than it actually is?

    Tony Cochran’s Agnes for the 29th of May looks at first like a bit of nonsense wordplay. But whether a book with the subject “All About Books” would discuss itself, and how it would discuss itself, is a logic problem. And not just a logic problem. Start from pondering how the book All About Books would describe the content of itself. You can go from that to an argument that it’s impossible to compress every possible message. Imagine an All About Books which contained shorthand descriptions of every book. And the descriptions have enough detail to exactly reconstruct each original book. But then what would the book list for the description of All About Books?

    And self-referential things can lead to logic paradoxes swiftly. You’d have some fine ones if Agnes were to describe a book All About Not-Described Books. Is the book described in itself? The question again sounds silly. But thinking seriously about it leads us to the decidability problem. Any interesting-enough logical system will always have statements that are meaningful and true that no one can prove.

    Furthermore, the suggestion of an “All About `All About Books’ Book” suggests to me power sets. That’s the set of all the ways you can collect the elements of a set. Power sets are always bigger than the original set. They lead to the staggering idea that there are many sizes of infinitely large sets, a never-ending stack of bigness.

    Robb Armstrong’s Jump Start for the 31st of May is part of a sequence about getting a tutor for a struggling kid. That it’s mathematics is incidental to the storyline, must be said. (It’s an interesting storyline, partly about Jojo’s father, a police officer, coming to trust Ray, an ex-convict. Jump Start tells many interesting and often deeply weird storylines. And it never loses its camouflage of being an ordinary family comic strip.) It uses the familiar gimmick of motivating a word problem by making it about something tangible.

    Ken Cursoe’s Tiny Sepuku for the 2nd of June uses the motif of non-Euclidean geometry as some supernatural magic. It’s a small reference, you might miss it. I suppose it is true that a high-dimensional analogue to conic sections would focus things from many dimensions. If those dimensions match time and space, maybe it would focus something from all humanity into the brain. I would try studying instead, though.

    Russell Myers’s Broom Hilda for the 3rd is a resisting-the-word-problems joke. It’s funny to figure on missing big if you have to be wrong at all. But something you learn in numerical mathematics, particularly, is that it’s all right to start from a guess. Often you can take a wrong answer and improve it. If you can’t get the exact right answer, you can usually get a better answer. And often you can get as good as you need. So in practice, sorry to say, I can’t recommend going for the ridiculous answer. You can do better.
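
    Here’s the improve-a-wrong-answer idea in miniature: Newton’s method for a square root, started from a deliberately ridiculous guess:

```python
def improve(guess, target):
    """One step of Newton's method for the square root of target."""
    return (guess + target / guess) / 2

# Start ridiculously wrong and watch the answer improve anyway.
x = 100.0   # a terrible first guess at the square root of 2
for _ in range(10):
    x = improve(x, 2)
print(x)  # about 1.41421356..., the square root of 2
```

    Each step roughly doubles the number of correct digits, which is why even an awful starting guess recovers so quickly.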

    • seaangel4444 4:41 pm on Sunday, 5 June, 2016 Permalink | Reply

      LOL I love the Broom Hilda cartoon, Joseph! And here I am, “smiling”! :) Cher xo


      • Joseph Nebus 2:56 am on Saturday, 11 June, 2016 Permalink | Reply

        Aw, quite glad you like. I do enjoy doing these comic strip reviews, partly for the chance to talk about subjects, partly because people get to see strips they hadn’t noticed before.

        Liked by 1 person

  • Joseph Nebus 3:00 pm on Friday, 15 April, 2016 Permalink | Reply
    Tags: , countability, , infinity, , , power sets,   

    A Leap Day 2016 Mathematics A To Z: Uncountable 

    I’m drawing closer to the end of the alphabet. While I have got choices for ‘V’ and ‘W’ set, I’ll admit that I’m still looking for something that inspires me in the last couple letters. Such inspiration might come from anywhere. HowardAt58, of that WordPress blog, gave me the notion for today’s entry.


    What are we doing when we count things?

    Maybe nothing. We might be counting just to be doing something. Or we might be counting because we want to do nothing. Counting can be a good way into a restful state. Fair enough. Just because we do something doesn’t mean we care about the result.

    Suppose we do care about the result of our counting. Then what is it we do when we count? The mechanism is straightforward enough. We pick out things and say, or imagine saying, “one, two, three, four,” and so on. Or we at least imagine the numbers along with the things being numbered. When we run out of things to count, we take whatever the last number was. That’s how many of the things there were. Why are there eight light bulbs in the chandelier fixture above the dining room table? Because there are not nine.

    That’s how lay people count anyway. Mathematicians would naturally have a more sophisticated view of the business. A much more powerful counting scheme. Concepts in counting that go far beyond what you might work out in first grade.

    Yeah, so that’s what most of us would figure. Things don’t get much more sophisticated than that, though. This probably is because the idea of counting is tied to the theory of sets. And the theory of sets grew, in part, to come up with a logically solid base for arithmetic. So many of the key ideas of set theory are so straightforward they hardly seem to need explaining.

    We build the idea of “countable” off of the nice, familiar numbers 1, 2, 3, and so on. That set’s called the counting numbers. They’re the numbers that everybody seems to recognize as numbers. Not just people. Even animals seem to understand at least the first couple of counting numbers. Sometimes these are called the natural numbers.

    Take a set of things we want to study. We’re interested in whether we can match the things in that set one-to-one with the things in the counting numbers. We don’t have to use all the counting numbers. But we can’t use the same counting number twice. If we’ve matched one chandelier light bulb with the number ‘4’, we mustn’t match a different bulb with the same number. Similarly, if we’ve got the number ‘4’ matched to one bulb, we mustn’t match ‘4’ with another bulb at the same time.

    If we can do this, then our set’s countable. If we really wanted, we could pick the counting numbers in order, starting from 1, and match up all the things with counting numbers. If we run out of things, then we have a finitely large set. The last number we used to match anything up with anything is the size, or in the jargon, the cardinality of our set. We might not care about the cardinality, just whether the set is finite. Then we can pick counting numbers as we like in no particular order. Just use whatever’s convenient.

    But what if we don’t run out of things? And it’s possible we won’t. Suppose our set is the negative whole numbers: -1, -2, -3, -4, -5, and so on. We can match each of those to a counting number many ways. We always can. But there’s an easy way. Match -1 to 1, match -2 to 2, match -3 to 3, and so on. Why work harder than that? We aren’t going to run out of negative whole numbers. And we aren’t going to find any we can’t match with some counting number. And we aren’t going to have to match two different negative numbers to the same counting number. So what we have here is an infinitely large, yet still countable, set.

    So a set of things can be countable and finite. It can be countable and infinite. What else is there to be?

    There must be something. It’d be peculiar to have a classification that everything was in, after all. At least it would be peculiar except for people studying what it means to exist or to not exist. And most of those people are in the philosophy department, where we’re scared of visiting. So we must mean there’s some such thing as an uncountable set.

    The idea means just what you’d guess if you didn’t know enough mathematics to be tricky. Something is uncountable if it can’t be counted. It can’t be counted if there’s no way to match it up, one thing-to-one thing, with the counting numbers. We have to somehow run out of counting numbers.

    It’s not obvious that we can do that. Some promising approaches don’t work. For example, the set of all the integers — 1, 2, 3, 4, 5, and all that, and 0, and the negative numbers -1, -2, -3, -4, -5, and so on — is still countable. Match the counting number 1 to 0. Match the counting number 2 to 1. Match the counting number 3 to -1. Match 4 to 2. Match 5 to -2. Match 6 to 3. Match 7 to -3. And so on.
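
    That back-and-forth matchup is easy to write down as a function. A Python version:

```python
def integer_for(counting_number):
    """The matchup described above: 1 -> 0, 2 -> 1, 3 -> -1, 4 -> 2, ..."""
    if counting_number == 1:
        return 0
    half, odd = divmod(counting_number, 2)
    return half if odd == 0 else -half

print([integer_for(n) for n in range(1, 8)])  # [0, 1, -1, 2, -2, 3, -3]
```

    No counting number gets reused and no integer gets missed, which is exactly what countable means.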

    Even ordered pairs of the counting numbers don’t do it. We can match the counting number 1 to the pair (1, 1). Match the counting number 2 to the pair (2, 1). Match the counting number 3 to (1, 2). Match 4 to (3, 1). Match 5 to (2, 2). Match 6 to (1, 3). Match 7 to (4, 1). Match 8 to (3, 2). And so on. We can achieve similar staggering results with ordered triplets, quadruplets, and more. Ordered pairs of integers, positive and negative? Longer to do, yes, but just as doable.
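
    The matchup for ordered pairs walks along diagonals, each diagonal collecting the pairs whose coordinates share a sum. A Python sketch that reproduces the order described above:

```python
def pairs_in_order(how_many):
    """Walk the ordered pairs diagonal by diagonal, matching the order
    in the text: (1,1), (2,1), (1,2), (3,1), (2,2), (1,3), ..."""
    out = []
    total = 2  # a + b is constant along each diagonal
    while len(out) < how_many:
        for a in range(total - 1, 0, -1):
            out.append((a, total - a))
            if len(out) == how_many:
                break
        total += 1
    return out

print(pairs_in_order(8))
# [(1, 1), (2, 1), (1, 2), (3, 1), (2, 2), (1, 3), (4, 1), (3, 2)]
```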

    So are there any uncountable things?

    Sure. Wouldn’t be here if there weren’t. For example: think about the set that’s all the ways to pick things from a set. I sense your confusion. Let me give you an example. Suppose we have a set of three things. They’re the numbers 1, 2, and 3. We can make a bunch of sets out of things from this set. We can make the set that just has ‘1’ in it. We can make the set that just has ‘2’ in it. Or the set that just has ‘3’ in it. We can also make the set that has just ‘1’ and ‘2’ in it. Or the set that just has ‘2’ and ‘3’ in it. Or the set that just has ‘3’ and ‘1’ in it. Or the set that has all of ‘1’, ‘2’, and ‘3’ in it. And we can make the set that hasn’t got any of these in it. (Yes, that does too count as a set.)

    So from a set of three things, we were able to make a collection of eight sets. If we had a set of four things, we’d be able to make a collection of sixteen sets. With five things to start from, we’d be able to make a collection of thirty-two sets. This collection of sets we call the “power set” of our original set, and if there’s one thing we can say about it, it’s that it’s bigger than the set we start from.
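For the curious, here’s a quick Python sketch that builds these collections and counts them, using the standard library’s itertools module. (My own illustration; the doubling pattern is the point.)

```python
from itertools import combinations

def power_set(items):
    """All the subsets of items, including the empty set."""
    items = list(items)
    return [set(c) for r in range(len(items) + 1)
            for c in combinations(items, r)]

# Three things give eight sets; four give sixteen; five give thirty-two.
for size in (3, 4, 5):
    print(size, len(power_set(range(1, size + 1))))  # 3 8, 4 16, 5 32
```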

    The power set for a finite set, well, that’ll be much bigger. But it’ll still be finite. Still be countable. What about the power set for an infinitely large set?

    And the power set of the counting numbers, the collection of all the ways you can make a set of counting numbers, is really big. Is it uncountably big?

    Let’s step back. Remember when I said mathematicians don’t get “much more” sophisticated than matching up things to the counting numbers? Here’s a little bit of that sophistication. We don’t have to match stuff up to counting numbers if we like. We can match the things in one set to the things in another set. If it’s possible to match them up one-to-one, with nothing missing in either set, then the two sets have to be the same size. The same cardinality, in the jargon.

    So. The set of the numbers 1, 2, 3, has to have a smaller cardinality than its power set. Want to prove it? Do this exactly the way you imagine. You run out of things in the original set before you run out of things in the power set, so there’s no making a one-to-one matchup between the two.

    With the infinitely large yet countable set of the counting numbers … well, the same result holds. It’s harder to prove. You have to show that there’s no possible way to match the infinitely many things in the counting numbers to the infinitely many things in the power set of the counting numbers. (The easiest way to do this is by contradiction. Imagine that you have made such a matchup, pairing everything in your power set to everything in the counting numbers. Then you go through your matchup and put together a collection that isn’t accounted for. Whoops! So you must not have matched everything up in the first place. Why not? Because you can’t.)
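The “collection that isn’t accounted for” can even be sketched in code, at least for a finite stub of an attempted matchup. This is my own illustration of the diagonal trick, not anything from a textbook: gather every number that is *not* in the set it got matched to. The resulting collection differs from every matched set in at least one spot.

```python
def diagonal_set(f, upto):
    """The collection of n that are NOT in the set matched to n.
    This collection differs from f(n) at the element n, for every n,
    so it can't appear anywhere in the matchup."""
    return {n for n in range(1, upto + 1) if n not in f(n)}

# A hypothetical, and doomed, attempted matchup of counting numbers
# to sets of counting numbers:
attempt = {1: {1, 2}, 2: {3}, 3: {1, 3}, 4: set()}
D = diagonal_set(lambda n: attempt[n], 4)
print(D)  # {2, 4}
for n in attempt:
    # D and attempt[n] disagree about whether n belongs:
    assert (n in D) != (n in attempt[n])
```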

    But the result holds. The power set of the counting numbers is some other set. It’s infinitely large, yes. And it’s so infinitely large that it’s somehow bigger than the counting numbers. It is uncountable.

    There’s more than one uncountably large set. Of course there are. We even know of some of them. For example, there’s the set of real numbers. Three-quarters of my readers have been sitting anxiously for the past eight paragraphs wondering if I’d ever get to them. There’s good reason for that. Everybody feels like they know what the real numbers are. And the proof that the real numbers are a larger set than the counting numbers is easy to understand. An eight-year-old could master it. You can find that proof well-explained within the first ten posts of pretty much every mathematics blog other than this one. (I was saving the subject. Then I finally decided I couldn’t explain it any better than everyone else has done.)

    Are the real numbers the same size, the same cardinality, as the power set of the counting numbers?

    Sure, they are.

    No, they’re not.

    Whichever you like. This is one of the many surprising mathematical results of the surprising 20th century. Starting from the common set of axioms about set theory, it’s undecidable whether the set of real numbers is as big as the power set of the counting numbers. You can assume that it is. This is known as the Continuum Hypothesis. And you can do fine mathematical work with it. You can assume that it is not. This is known as the … uh … Rejecting the Continuum Hypothesis. And you can do fine mathematical work with that. What’s right depends on what work you want to do. Either is consistent with the starting axioms. You are free to choose either, or if you like, neither.

    My understanding is that most set theory finds it more productive to suppose that they’re not the same size. I don’t know why this is. I know enough set theory to lead you to this point, but not past it.

    But that the question can exist tells you something fascinating. You can take the power set of the power set of the counting numbers. And this gives you another, even vaster, uncountably large set. As enormous as the collection of all the ways to pick things out of the counting numbers is, this power set of the power set is even vaster.

    We’re not done. There’s the power set of the power set of the power set of the counting numbers. And the power set of that. Much as geology teaches us to see Deep Time, and astronomy Deep Space, so power sets teach us to see Deep … something. Deep Infinity, perhaps.

  • Joseph Nebus 3:00 pm on Friday, 8 April, 2016 Permalink | Reply
    Tags: , , , infinity, , , , ,   

    A Leap Day 2016 Mathematics A To Z: Riemann Sphere 

    To my surprise nobody requested any terms beginning with `R’ for this A To Z. So I take this free day to pick on a concept I’d imagine nobody saw coming.

    Riemann Sphere.

    We need to start with the complex plane. This is just, well, a plane. All the points on the plane correspond to a complex-valued number. That’s a real number plus a real number times i. And i is one of those numbers which, squared, equals -1. It’s like the real number line, only in two directions at once.

    Take that plane. Now put a sphere on it. The sphere has radius one-half. And it sits on top of the plane. Its lowest point, the south pole, sits on the origin. That’s whatever point corresponds to the number 0 + 0i, or as humans know it, “zero”.

    We’re going to do something amazing with this. We’re going to make a projection, something that maps every point on the sphere to every point on the plane, and vice-versa. In other words, we can match every complex-valued number to one point on the sphere. And every point on the sphere to one complex-valued number. Here’s how.

    Imagine sitting at the north pole. And imagine that you can see through the sphere. Pick any point on the plane. Look directly at it. Shine a laser beam, if that helps you pick the point out. The laser beam is going to go into the sphere — you’re squatting down to better look through the sphere — and come out somewhere on the sphere, before going on to the point in the plane. The point where the laser beam emerges? That’s the mapping of the point on the plane to the sphere.
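If you work through the geometry of this setup, with the sphere of radius one-half resting its south pole on the origin and the laser shining from the north pole at height 1, you get a tidy formula for where the beam crosses the sphere. Here’s a Python sketch; the formula is my own derivation for this particular setup, not a quotation from anywhere.

```python
def to_sphere(x, y):
    """Map the plane point x + iy to the sphere described above:
    radius one-half, south pole at the origin, projecting from
    the north pole (0, 0, 1)."""
    d = x * x + y * y + 1
    return (x / d, y / d, (x * x + y * y) / d)

print(to_sphere(0, 0))    # (0.0, 0.0, 0.0): zero sits at the south pole
print(to_sphere(1, 0))    # (0.5, 0.0, 0.5): distance 1 lands on the equator
print(to_sphere(100, 0))  # nearly (0, 0, 1): big numbers crowd the north pole
```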

    There’s one point with an obvious match. The south pole is going to match zero. They touch, after all. Other points … it’s less obvious. But some are easy enough to work out. The equator of the sphere, for instance, is going to match all the points a distance of 1 from the origin. So it’ll have the point matching the number 1 on it. It’ll also have the point matching the number -1, and the point matching i, and the point matching -i. And some other numbers.

    All the numbers that are less than 1 from the origin, in fact, will have matches somewhere in the southern hemisphere. If you don’t see why that is, draw some sketches and think about it. You’ll convince yourself. If you write down what convinced you and sprinkle the word “continuity” in here and there, you’ll convince a mathematician. (WARNING! Don’t actually try getting through your Intro to Complex Analysis class doing this. But this is what you’ll be doing.)

    What about the numbers more than 1 from the origin? … Well, they all match to points on the northern hemisphere. And tell me that doesn’t stagger you. It’s one thing to match the southern hemisphere to all the points in a circle of radius 1 away from the origin. But we can match everything outside that little circle to the northern hemisphere. And it all fits in!

    Not amazed enough? How about this: draw a circle on the plane. Then look at the points on the Riemann sphere that match it. That set of points? It’s also a circle. A line on the plane? That maps to a circle on the sphere too, one that passes through the north pole. (From the sphere’s point of view, a line is just a circle that happens to run through the point at infinity.)

    How about this? Take a pair of intersecting lines or circles in the plane. Look at what they map to. That mapping, squashed as it might be to the northern hemisphere of the sphere? The projection of the lines or circles will intersect at the same angles as the original. As much as space gets stretched out (near the south pole) or squashed down (near the north pole), angles stay intact.

    OK, but besides being stunning, what good is all this?

    Well, one is that it’s a good thing to learn on. Geometry gets interested in things that look, at least in places, like planes, but aren’t necessarily. These spheres are such things, and the way a sphere matches a plane is obvious. We can learn the tools for geometry on the Möbius strip or the Klein bottle or other exotic creations from the tools we prove out on this.

    And then physics comes in, being all weird. Much of quantum mechanics makes sense if you imagine it as things on the sphere. (I admit I don’t know exactly how. I went to grad school in mathematics, not in physics, and I didn’t get to the physics side of mathematics much at that time.) The strange ways distance can get mushed up or stretched out have echoes in relativity. They’ll continue having these echoes in other efforts to explain physics as geometry, the way that string theory will.

    Also important is that the sphere has a top, the north pole. That point matches … well, what? It’s got to be something infinitely far away from the origin. And this makes sense. We can use this projection to make a logically coherent, sensible description of things “approaching infinity”, the way we want to when we first learn about infinitely big things. Wrapping all the complex-valued numbers to this ball makes the vast manageable.

    It’s also good numerical practice. Computer simulations have problems with infinitely large things, for the obvious reason. We have a couple of tools to handle this. One is to model a really big but not infinitely large space and hope we aren’t breaking anything. One is to create a “tiling”, making the space we are able to simulate repeat itself in a perfect grid forever and ever. But recasting the problem from the infinitely large plane onto the sphere can also work. This requires some ingenuity, to be sure we do the recasting correctly, but that’s all right. If we need to run a simulation over all of space, we can often get away with doing a simulation on a sphere. And isn’t that also grand?

    The Riemann named here is Bernhard Riemann, yet another of those absurdly prolific 19th century mathematicians, especially considering how young he was when he died. His name is all over the fundamentals of analysis and geometry. When you take Introduction to Calculus you get introduced pretty quickly to the Riemann Sum, which is how we first learn how to calculate integrals. It’s that guy. General relativity, and much of modern physics, is based on advanced geometries that again fall back on principles Riemann noticed or set out or described so well that we still think of them as his discoveries.

    • elkement (Elke Stangl) 7:16 am on Sunday, 10 April, 2016 Permalink | Reply

      Of course I jump onto the ‘quantum physics on a sphere’ stuff. Again, I can’t resist posting a link but I am pretty sure you will like it: Scott Aaronson explains what quantum physics actually is – from a math / computer science perspective, why complex numbers are used etc. – in the most intriguing and concisest way I have ever seen:


      Quote: “Why did God go with the complex numbers and not the real numbers?
      Years ago, at Berkeley, I was hanging out with some math grad students — I fell in with the wrong crowd — and I asked them that exact question. The mathematicians just snickered. “Give us a break — the complex numbers are algebraically closed!” To them it wasn’t a mystery at all.”


  • Joseph Nebus 3:00 pm on Thursday, 7 April, 2016 Permalink | Reply
    Tags: age, asymptotes, , infinity,   

    Reading the Comics, April 4, 2016: Precursor To April 5 Edition 

    Comic Strip Master Command followed up its slow times with a rush of comic strips I can talk about. Or that I can sort-of talk about. There’s enough for a regular essay just about the comics from the 5th of April alone. So today’s Reading the Comics entry is just the strips up through the 4th of April. That makes for a slightly short collection but what can I do besides schedule these for a consistent day of the week regardless of how many comics there are to talk about?

    Dave Whamond’s Reality Check for the 3rd of April mentions the infinite-monkeys tale. And it even does so in iconic form, in talking about writing Shakespeare’s Hamlet. I don’t mean to disparage the comic, especially when it’s put five punch lines into the panel. (I admit I’m a little disappointed when a Sunday strip is the same one- or three-panel format as a regular daily comic, though.) But I’m pretty sure this same premise was done by Fred Allen on the radio sometime around 1940. I don’t think that mentioned the infinite monkeys, though.

    Missy Meyer’s Holiday Doodles for the 4th of April mentioned that it was Square Root Day. I am curious whether the comic will mention anything for the 9th of April. I have noticed some people muttering about this Perfect Squares Day. Also I’m surprised that “glasses with tape over the bridge” is still a signifier of square-ness.

    Brandon Sheffield and Dami Lee’s Hot Comics for Cool People for the 4th titles its installment Perfect Geometry Comics. And it presents, as often will happen, some muddle of algebra and geometry as the way to work out a brilliantly perfect solution. Also, the comic features a dog in safety goggles, which is always good to see.

    Graham Nolan’s Sunshine State for the 4th presents a word problem that might be a good introduction to asymptotes. The ratio of two people’s ages will approach without ever quite equalling 1. But it will, if the people last long enough, come as close as one might want. There’s probably also a good lesson to be made by comparing this age problem to the problem of Achilles and the tortoise.

  • Joseph Nebus 3:00 pm on Saturday, 26 March, 2016 Permalink | Reply
    Tags: , chocolate, , , gratitude, infinity, Lansing, Michigan State University   

    Things To Be Thankful For 

    A couple buildings around town have blackboard paint and a writing prompt on the walls. Here’s one my love and I wandered across the other day while going to Fabiano’s Chocolate for the obvious reason. (The reason was to see their novelty three-foot-tall, 75-pound solid chocolate bunny. Also to buy less huge piles of candy.)

    Written in chalk to the prompt 'I'm Grateful For': 'C Weierstrass' and 'G Cantor', as well as 'MSU B-Ball'.

    I do not know who in this context J D McCarthy is.

    I recognized that mathematics majors had been past. Well, anyone with an interest in popular mathematics might have written they’re grateful for “G. Cantor”. His work’s escaped into the popular imagination, at least a bit. “C. Weirstrauβ”, though, that’s a mathematics major at work.

    Karl Weierstrass, the way his name’s rendered in the English-language mathematics books I know, was one of the people who made analysis what it is today. Analysis is, at heart, the study of why calculus works. He attacked the foundations of calculus, which by modern standards weren’t quite rigorous. And he did brilliantly, giving us the modern standards of rigor. He’s terrified generations of mathematics majors by defining what it is for a function to be continuous. Roughly, it means we can draw the graph of a function without having to lift a pencil. He put it in a non-rough manner. He also developed the precise modern idea for what a limit is. Roughly, a limit is exactly what you might think it means; but to be precise takes genius.
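Put in the non-rough manner, the continuity definition reads like this. (My transcription of the standard epsilon-delta statement, not a quotation from Weierstrass.)

```latex
% f is continuous at the point a when: for every tolerance epsilon
% there is a window delta so that staying within delta of a
% keeps f(x) within epsilon of f(a).
\forall \varepsilon > 0 \;\, \exists \delta > 0 \;\, \forall x :
  \quad |x - a| < \delta \implies |f(x) - f(a)| < \varepsilon
```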

    Among Weierstrass’s students was Georg Cantor. His is a more familiar name. He proved that just because a set has infinitely many elements in it doesn’t mean that it can’t be quite small compared to other infinitely large sets. His Diagonal Argument shows there must be, in a sense, more real numbers than there are counting numbers. And a child can understand it. Cantor also pioneered the modern idea of set theory. For a while this looked like it might be the best way to understand why arithmetic works like it does. (My understanding is it’s now thought category theory more fundamental. But I don’t know category theory well enough to have an informed opinion.)

    The person grateful to Michigan State University basketball I assume wrote that before last Sunday, when the school wrecked so many NCAA tournament brackets.

  • Joseph Nebus 3:00 pm on Friday, 11 March, 2016 Permalink | Reply
    Tags: , , infinity, , ,   

    A Leap Day 2016 Mathematics A To Z: Fractions (Continued) 

    Another request! I was asked to write about continued fractions for the Leap Day 2016 A To Z. The request came from Keilah, of the Knot Theorist blog. But I’d already had a c-word request in (conjecture). So you see my elegant workaround to talk about continued fractions anyway.

    Fractions (continued).

    There are fashions in mathematics. There are fashions in all human endeavors. But mathematics almost begs people to forget that it is a human endeavor. Sometimes a field of mathematics will be popular a while and then fade. Some fade almost to oblivion. Continued fractions are one of them.

    A continued fraction comes from a simple enough starting point. Start with a whole number. Add a fraction to it. 1 + \frac{2}{3}. Everyone knows what that is. But then look at the denominator. In this case, that’s the ‘3’. Why couldn’t that be a sum, instead? No reason. Imagine then the number 1 + \frac{2}{3 + 4}. Is there a reason that we couldn’t, instead of the ‘4’ there, have a fraction instead? No reason beyond our own timidity. Let’s be courageous. Does 1 + \frac{2}{3 + \frac{4}{5}} even mean anything?

    Well, sure. It’s getting a little hard to read, but 3 + \frac{4}{5} is a fine enough number. It’s 3.8. \frac{2}{3.8} is a less friendly number, but it’s a number anyway. It’s a little over 0.526. (Its decimal digits repeat forever, since it’s really \frac{10}{19}, but it’s a perfectly good number all the same.) And we can add 1 to that easily. So 1 + \frac{2}{3 + \frac{4}{5}} means a number a slight bit more than 1.526.
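Working from the bottom of the stack upward is the easy way to evaluate one of these. A Python sketch of my own, using the standard library’s exact fractions so no decimals get mangled:

```python
from fractions import Fraction

def evaluate(a, b):
    """Evaluate a0 + b1/(a1 + b2/(a2 + ... + bn/an)), bottom-up.
    a = [a0, a1, ..., an] are the whole numbers;
    b = [b1, ..., bn] are the numerators."""
    value = Fraction(a[-1])
    for a_k, b_k in zip(reversed(a[:-1]), reversed(b)):
        value = a_k + Fraction(b_k) / value
    return value

# The example above: 1 + 2/(3 + 4/5)
v = evaluate([1, 3, 5], [2, 4])
print(v)  # 29/19, that slight bit more than 1.526
```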

    Dare we replace the “5” in that expression with a sum? Better, with the sum of a whole number and a fraction? If we don’t fear being audacious, yes. Could we replace the denominator of that with another sum? Yes. Can we keep doing this forever, creating this never-ending stack of whole numbers plus fractions? … If we want an irrational number, anyway. If we want a rational number, this stack will eventually end. But suppose we feel like creating an infinitely long stack of continued fractions. Can we do it? Why not? Who dares, wins!

    OK. Wins what, exactly?

    Well … um. Continued fractions certainly had a fashionable time. John Wallis, the 17th century mathematician famous for introducing the ∞ symbol, and for an interminable quarrel with Thomas Hobbes over Hobbes’s attempts to reform mathematics, did much to establish continued fractions as a field of study. (He’s credited with inventing the field. But all claims to inventing something big are misleading. Real things are complicated and go back farther than people realize, and inventions are more ambiguous than people think.) The astronomer Christiaan Huygens showed how to use continued fractions to design better gear ratios. This may strike you as the dullest application of mathematics ever. Let it. It’s also important stuff. People who need to scale one movement to another need this.

    In the 18th and 19th century continued fractions became interesting for higher mathematics. Continued fractions were the approach Leonhard Euler used to prove that e had to be irrational. That’s one of the superstar numbers of mathematics. Johann Heinrich Lambert used continued fractions to show that if θ is a rational number (other than zero) then the tangent of θ must be irrational. This is one path to showing that π must be irrational. Many of the astounding theorems of Srinivasa Ramanujan were about continued fractions, or ideas which built on continued fractions.

    But since the early 20th century the field’s evaporated. I don’t have a good answer why. The best speculation I’ve heard is that the field seems to fit poorly into any particular topic. Continued fractions get interesting when you have an infinitely long stack of nesting denominators. You don’t want to work with infinitely long strings of things before you’ve studied calculus. You have to be comfortable with these things. But that means students don’t encounter it until college, at least. And at that point fractions seem beneath the grade level. There’s a handful of proofs best done by them. But those proofs can be shown as odd, novel approaches to these particular problems. Studying the whole field is hardly needed.

    So, perhaps because it seems like an odd fit, the subject’s dried up and blown away. Even enthusiasts seem to be resigned to its oblivion. Professor Adam Van Tuyl, then at Queen’s University in Kingston, Ontario, composed a nice set of introductory pages about continued fractions. But the page is defunct. Dr Ron Knott has a more thorough page, though, and one with calculators that work well.

    Will continued fractions make a comeback? Maybe. It might take the discovery of some interesting new results, or some better visualization tools, to reignite interest. Chaos theory, the study of deterministic yet unpredictable systems, first grew (we now recognize) in the 1890s. But it fell into obscurity. When we got some new theoretical papers and the ability to do computer simulations, it flowered again. For a time it looked ready to take over all mathematics, although we’ve got things under better control now. Could continued fractions do the same? I’m skeptical, but won’t rule it out.

    Postscript: something you notice quickly with continued fractions is they’re a pain to typeset. We’re all right with 1 + \frac{2}{3 + \frac{4}{5}} . But after that the LaTeX engine that WordPress uses to render mathematical symbols is doomed. A real LaTeX engine gets another couple nested denominators in before the situation is hopeless. If you’re writing this out on paper, the way people did in the 19th century, that’s all right. But there’s no typing it out that way.

    But notation is made for us, not us for notation. If we want to write a continued fraction in which the numerators are all 1, we have a brackets shorthand available. In this we would write 2 + \frac{1}{3 + \frac{1}{4 + \cdots }} as [2; 3, 4, … ]. The numbers are the whole numbers added to the next level of fractions. Another option, and one that lends itself to having numerators which aren’t 1, is to write out a string of fractions. In this we’d write 2 + \frac{1}{3 +} \frac{1}{4 +} \frac{1}{\cdots + }. We have to trust people notice the + sign is in the denominator there. But if people know we’re doing continued fractions then they know to look for the peculiar notation.
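The bracket shorthand unrolls the same bottom-up way. Another little Python sketch of mine, again with exact fractions:

```python
from fractions import Fraction

def from_brackets(terms):
    """Value of the simple continued fraction [t0; t1, t2, ...],
    the kind where every numerator is 1."""
    value = Fraction(terms[-1])
    for t in reversed(terms[:-1]):
        value = t + 1 / value
    return value

print(from_brackets([2, 3, 4]))  # 30/13, which is 2 + 1/(3 + 1/4)
# Longer and longer stacks of [1; 1, 1, ...] creep toward the
# golden ratio, about 1.618:
print(float(from_brackets([1] * 20)))
```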

    • gaurish 5:09 pm on Friday, 11 March, 2016 Permalink | Reply

      I disagree! Research in field of Continued Fractions never died, so no question of comeback. See following two books:

      (1).Continued Fractions by Aleksandr Yakovlevich Khinchin (1964)
      (2).Neverending Fractions: An Introduction to Continued Fractions by Jonathan Borwein, ‎Alf van der Poorten, ‎Jeffrey Shallit (2014)


      • Joseph Nebus 3:32 am on Monday, 14 March, 2016 Permalink | Reply

        I may be overstating things to say the field’s died. But I don’t remember it ever coming up in my own education, and I can’t — on a quick survey — find evidence of the subject being taught regularly at any of the colleges or universities I’ve had much to do with. It’s mentioned as one of the subjects for a special topics course offered every other year at Michigan State University. But that’s also at the end of the roster, where they usually list the things they’ll get to if there’s time, which there never is.

        And I know these aren’t the only books about continued fractions published recently, but 1964 isn’t all that recent. I am sure good people are finding interesting new results. But the field isn’t thriving the way, say, Monte Carlo methods, or wavelets, or KAM theory are.


        • gaurish 6:12 am on Monday, 14 March, 2016 Permalink | Reply

          Today I just skimmed through a paper on Continued Fractions published in Acta Arithmetica in September 2015. (https://goo.gl/CtXops) It’s recent I guess :-)

          Also, if you haven’t read the 1964 book I suggested in previous comment then you know nothing about continued fractions.

          You probably never dived deep into Number Theory, as I never dived deep into Differential Equations so I don’t know that KAM theory is an active field of research!

          At my Institute (in India), continued fractions are taught in 3rd semester and with decent details. In 2014, a paper on an unsolved problem in continued fractions (Zaremba’s Conjecture) appeared in Annals of Mathematics (http://annals.math.princeton.edu/2014/180-1/p03 )…..

          My whole point was: “If you don’t know something, it doesn’t mean that it doesn’t exist”.


          • Joseph Nebus 7:20 am on Wednesday, 16 March, 2016 Permalink | Reply

            I am happy to take correction. At least, I want to be happy to take correction. You’re right that I don’t know all that’s going on in mathematics — it’s remarkable I know anything that’s going on in mathematics — and I’d be a fool to say courses teaching the subject aren’t there. Thank you for letting me know there’s more in the field than I suspected.


    • KnotTheorist 8:21 pm on Friday, 11 March, 2016 Permalink | Reply

      Thanks for the informative post! I love reading about mathematical history.


  • Joseph Nebus 12:44 am on Monday, 8 February, 2016 Permalink | Reply
    Tags: infinity, , , , Super Bowl   

    Reading the Comics, February 6, 2016: Lottery Edition 

    As mentioned, the lottery was a big thing a couple of weeks ago. So there were a couple of lottery-themed comics recently. Let me group them together. Comic strips tend to be anti-lottery. It’s as though people trying to make a living drawing comics for newspapers are skeptical of wild long-shot dreams.

    T Lewis and Michael Fry’s Over The Hedge started a lottery storyline the 1st of February. Verne, the turtle, repeats the tired joke that the lottery is a tax on people bad at mathematics. Enormous jackpots, like the $1,500,000,000 payout of a couple weeks back, break one leg of the anti-lottery argument. If the expected payout is large enough then the expectation value of playing can become positive. The expectation value is one of those statistics terms that almost tells you what it is just by the name. It’s what you would expect as the average result if you could repeat some experiment arbitrarily many times. If the payout is 1.5 billion, and the chance of winning one in 250 million, then the expected value of the payout is six dollars. If a ticket costs less than six dollars, then — if you could play over and over, hundreds of millions of times — you’d expect to come out ahead on average.
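The arithmetic, spelled out with the round numbers from above (real lotteries have fussier odds and taxes besides):

```python
jackpot = 1_500_000_000      # dollars
odds = 250_000_000           # one chance in 250 million
ticket_price = 2             # dollars

expected_payout = jackpot / odds
print(expected_payout)                 # 6.0 dollars per ticket, on average
print(expected_payout - ticket_price)  # 4.0: ahead, if you could repeat forever
```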

    If you could. Of course, you can’t play the lottery hundreds of millions of times. You can play a couple of times at most. (Even if you join a pool at work and buy, oh, a thousand tickets. That’s still barely better than playing twice.) And the payout may be less than the full jackpot; multiple winners are common things in the most enormous jackpots. Still, if you’re pondering whether it’s sensible to spend two dollars on a billion-dollar lottery jackpot? You’re being fussy. You’ll spend at least that much on something more foolish and transitory — the lottery ticket can at least be used as a bookmark — I’ll bet.

    Jef Mallett’s Frazz for the 4th of February picks up the anti-lottery crusade. Caulfield does pin down that lotteries work because people figure they have a better chance of winning than they truly do. Nobody buys a ticket because they figure it’s worth losing a dollar or two. It’s because they figure the chance is worth a little money.

    Ken Cursoe’s Tiny Sepuku for the 4th of February consults the Chinese Zodiac Monkey for help on finding lucky numbers. There’s not really any finding them. Lotteries work hard to keep the winning numbers as unpredictable as possible. I have heard the lore that numbers up to 31 are picked by more people — they’re numbers that can be birthdays — so that multiple winners on the same drawing are more likely. I don’t know that this is true, though. I suspect that I could feel comfortable even with a four-way split of one and a half billions of dollars. Five-way would be out of the question, of course. Better to tear up the ticket than take that undignified split.

    Ahead of the exam, Ruthie asks, 'Instead of two number 2 pencils, can we bring one number 3 pencil and one number 1? Or one number 4 pencil or four number 1 pencils? And will there be any math on this test? I'm not good at math.'

    In Rick Detorie’s One Big Happy for the 3rd of February, 2016. The link will probably expire in early March.

    Rick Detorie’s One Big Happy for the 3rd of February features Ruthie tossing off a confusing pile of numbers on the way to declaring herself bad at mathematics. It’s always the way.

    Breaking up a whole number like 4 into different sums of whole numbers is a mathematics problem also. Splitting up 4 into, say, ‘2 plus 1 plus 1’, is a ‘partition’ of the number. I’m not sure of any important results that follow from this sort of integer partition directly. But splitting up sets of things different ways runs through a lot of mathematics. Integer partitions are the ones you can do in elementary school.
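Here’s a Python sketch, my own illustration of the elementary-school version, that lists every partition of a number:

```python
def partitions(n, largest=None):
    """All the ways to write n as a sum of whole numbers,
    each sum listed in decreasing order so nothing repeats."""
    if largest is None:
        largest = n
    if n == 0:
        return [[]]
    out = []
    for first in range(min(n, largest), 0, -1):
        for rest in partitions(n - first, first):
            out.append([first] + rest)
    return out

for p in partitions(4):
    print(p)  # [4], [3, 1], [2, 2], [2, 1, 1], [1, 1, 1, 1]
```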

    Percy Crosby’s Skippy for the 3rd of February — I believe it originally ran December 1928 — is a Roman numerals joke. The mathematical content may be low, but what the heck. It’s kind of timely. The Super Bowl, set for today, has been the most prominent use of Roman numerals we have since the Star Trek movies stopped using them a quarter-century ago.

    Bill Amend’s FoxTrot for the 7th of February seems to be in agreement. And yes, I’m disappointed the Super Bowl is giving up on Roman numerals, much the way I’m disappointed they’re using a standardized and quite boring logo for each year. Part of the glory of past Super Bowls is seeing old graphic design eras preserved like fossils.

    Brian Gordon’s Fowl Language for the 5th of February shows a duck trying to explain incredibly huge numbers to his kid. It’s hard. You need to appreciate mathematics some to start appreciating real vastness. I’m not sure anyone can really have a feel for a number like 300 sextillion, the character’s estimate for the number of stars there are. You can make rationalizations for what numbers that big are like, but I suspect the mind shies back from staring directly at it.

    Infinity, and the many different sizes of infinity, might be easier to work with. One doesn’t need to imagine infinitely many things to work out the properties of infinitely large sets. You could do as well with a neatly drawn rectangle and some other, bigger, rectangles. But if you want to talk about the number 300,000,000,000,000,000,000,000 then you do want to think of something true about that number which isn’t also true about eight or about nine hundred million. But geology teaches us to ponder Deep Time. Astronomy trains us to imagine incredibly vast distances. Why not spend some time pondering huge numbers?

    And with all that said, I’d like to make one more call for any requests for my winter 2016 Mathematics A To Z glossary. There are quite a few attractive letters left unclaimed; a word or short term could be yours!

  • Joseph Nebus 4:00 pm on Friday, 25 December, 2015 Permalink | Reply
    Tags: Christmas trees, forecasting, infinity, , , Platonic Ideals,   

    Reading the Comics, December 23, 2015: Richard Thompson Christmas Trees Edition 

    Richard Thompson’s Cul de Sac for the 19th of December (a rerun, alas, from the 18th of December, 2010) gives me a name for this Reading the Comics installment. Just as in the Richard’s Poor Almanac installment mentioned last time, he gives us a Christmas tree occupying a non-Euclidean space. Non-Euclidean spaces do open up the possibility of many wondrous and counterintuitive phenomena. Trees probably aren’t among them, but I don’t know a better shorthand way to describe their mysteries. And if you’re not sure why so many people say this was the greatest comic strip of our still-young century, look at little Pete in the last panel. Both his expression and the composition of the panel are magnificent.

    Tom Toles’s Randolph Itch, 2 am for the 21st of December is a rerun. And it’s one that’s been mentioned around here as recently as August. I don’t care. It’s still a good funny slapstick joke. The kicker at the bottom is also a solid giggle.

    Richard Thompson’s Richard’s Poor Almanac for the 21st of December justifies my theme with its Platonic Fir. The Platonic Ideals of objects are, properly speaking, philosophical constructs. If they are constructs, anyway, and not the things that truly exist; and we must be careful what we mean by ‘exist’ in this context. But Thompson’s diagram shows this Platonic Fir drawn as a mathematical diagram. That’s another common motif. Mathematical constructs, ideas like “triangles” and “circles” and “rotations”, do suggest Platonic Ideals quite closely. We might be a bit pressed to say what the quintessence of chair-ness is, the thing all chairs must be aspects of. But we can be pretty sure we understand what a triangle is, apart from our messy and imperfect real-world approximations of a true triangle. When mathematics enthusiasts speak of the beauty of pure mathematics it does seem like they speak of the beauty of approaching Platonic Ideals.

    John Graziano’s Ripley’s Believe It or Not for the 21st of December continues its Rubik’s Cube obsession. Graziano spells Rubik correctly this time.

    Don Asmussen’s Bad Reporter panel for the 23rd of December does a joke that depends on the idea of getting to be “more than infinity”. Every kid has run into the problem of trying to understand “infinity plus one”. The way we speak of “infinity” we can’t really talk about getting “more than infinity”. But we are able to think meaningfully of ways to differentiate sizes of infinity. There are some infinitely large sets that, in a sensible way, are bigger than other infinitely large sets. That’s a fun field of mathematics. You can get to interesting questions in it without needing much background or experience. It’s almost ideal for pop-mathematics essays and if you don’t believe me, then look at how many results you get googling for “Cantor’s Diagonalization Argument”. It’s not an infinite number of results, but it’ll get you quite close.
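    The diagonal argument is easy enough to act out in miniature. Here is a sketch of my own, using a finite list of finite binary strings to stand in for the infinite ones:

    ```python
    # Cantor's diagonal trick: flip the k-th digit of the k-th sequence
    # and you get a sequence that can't be anywhere on the list, since it
    # disagrees with row k in position k.
    rows = [
        [0, 1, 0, 1, 0],
        [1, 1, 1, 1, 1],
        [0, 0, 0, 0, 0],
        [1, 0, 1, 0, 1],
        [0, 1, 1, 0, 0],
    ]

    diagonal = [1 - rows[k][k] for k in range(len(rows))]
    print(diagonal)
    ```

    The real argument runs the same way on an infinite list of infinite binary sequences, which is how we know the real numbers outnumber the counting numbers.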

    Brian and Ron Boychuk’s Chuckle Brothers for the 23rd of December is the anthropomorphic-numerals joke for this time around.

    Mark Litzler’s Joe Vanilla for the 23rd of December is built on the idea that it’s absurd to develop an algorithm that could predict earning potential, hairline at 50, and fidelity. It sounds silly at first glance. But if we’ve learned anything from sabermetrics it’s that all kinds of physical traits can be studied, and modeled, and predicted. With a large and reliable enough data set, and with a mindfully developed algorithm, these models can become quite good at predicting things. The underlying property is that on average, people are average. If we know what is typical, and we have reason to think that “typical” is not changing, then we can forecast the future pretty well based on what we already see. Or if we have reason to expect that “typical” is changing in ways we understand, we can still make good forecasts.

  • Joseph Nebus 3:00 pm on Monday, 5 October, 2015 Permalink | Reply
    Tags: , infinity, , simulations, zero point energy   

    Reading the Comics, October 1, 2015: Big Questions Edition 

    I’m cutting the collection of mathematically-themed comic strips at the transition between months. The set I have through the 1st of October is long enough already. That’s mostly because the first couple of strips suggested some big, at least somewhat mathematically-based topics. Those are fun to reason about, but take time to introduce. So let’s jump into them.

    Lincoln Peirce’s Big Nate: First Class for the 27th of September was originally published the 22nd of September, 1991. Nate and Francis trade off possession of the basketball, and a strikingly high number of successful shots in a row considering their age, in the infinitesimally sliced last second of the game. There’s a rather good Zeno’s-paradox-type-question to be made out of this. Suppose the game started with one second to go and Nate ahead by one point, since it is his strip. At one-half second to go, Francis makes a basket and takes a one point lead. At one-quarter second to go, Nate makes a basket and takes a one point lead. At one-eighth of a second to go, Francis repeats the basket; at one-sixteenth of a second, Nate does. And so on. Suppose they always make their shots, and suppose that they are able to make shots without requiring any more than half the remaining time available. Who wins, and why?
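    We can at least tally the first several rounds of this. A sketch, with my own assumption that each basket is worth two points so a one-point lead flips each time:

    ```python
    nate, francis = 1, 0           # Nate starts the last second ahead by one
    remaining = 1.0
    for basket in range(1, 21):    # the first 20 of infinitely many baskets
        remaining /= 2             # each shot uses half the time left
        if basket % 2 == 1:
            francis += 2           # Francis takes a one-point lead ...
        else:
            nate += 2              # ... and Nate takes it right back

    print(nate - francis, remaining)
    ```

    The elapsed times 1/2 + 1/4 + 1/8 + … sum to exactly one second, so all infinitely many baskets fit in before the buzzer, and there is no last basket to settle who wins. That is where the paradox bites.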

    Tim Rickard’s Brewster Rockit for the 27th of September is built on the question of whether the universe might be just a computer simulation, and if so, how we might tell. Being a computer simulation is one of those things that would seem to explain why mathematics tells us so much about the universe. One can make a probabilistic argument about this. Suppose there is one universe, and there are some number of simulations of the universe. Call that number N. If we don’t know whether we’re in the real or the simulated universe, then it would seem we have an estimated probability of being in the real universe of one divided by N plus 1. The chance of being in the real universe starts out none too great and gets dismally small pretty fast.
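    The arithmetic of that estimate is quick to lay out. A sketch:

    ```python
    # One real universe among N simulations: if we can't tell which one
    # we're in, the chance of being in the real one is 1/(N + 1).
    chances = {n: 1 / (n + 1) for n in (1, 10, 1000, 10**6)}
    for n, p in chances.items():
        print(n, p)
    ```

    Even a single simulation drops the chance to one-half, and it only gets worse from there.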

    But this does put us in philosophical difficulties. If we are in something that is a complete, logically consistent universe that cannot be escaped, how is it not “the real” universe? And if “the real” universe is accessible from within “the simulation” then how can they be separate? The question is hard to answer and it’s far outside my realm of competence anyway.

    Mark Leiknes’s Cow and Boy Classics for the 27th of September originally ran the 15th of September, 2008. And it talks about the ideas of zero-point energy and a false vacuum. This is about something that seems core to cosmology: how much energy is there in a vacuum? That is, if there’s nothing in a space, how much energy is in it? Quantum mechanics tells us it isn’t zero, in part because matter and antimatter flutter into and out of existence all the time. And there’s gravity, which is hard to explain quite perfectly. Mathematical models of quantum mechanics, and gravity, make various predictions about how much the energy of the vacuum should be. Right now, the models don’t give us really good answers.

    Some suggest that there might be more energy in the vacuum than we could ever use, and that if there were some way to draw it off — well, there’d never be a limit to anything ever again. I think this an overly optimistic projection. The opposite side of this suggests that if it is possible to draw energy out of the vacuum, that means it must be possible to shift empty space from its current state to a lower-energy state, much the way you can get energy out of a pile of rocks by making the rocks fall. But the lower-energy vacuum might have different physics in ways that make it very hard for us to live, or for us to exist. I think this an overly pessimistic projection. But I am not an expert in the fields, which include cosmology, quantum mechanics, and certain rather difficult tinkerings with the infinitely many.

    Mason Mastroianni, Mick Mastroianni, and Perri Hart’s B.C. for the 28th of September is a joke in the form of true, but useless, word problem answers. Well, putting down a lower bound on what the answer is can help. If you knew what three times twelve was, you could get to four times twelve reliably, and that’s a help. But if you’re lost for three times twelve then you’re just stalling for time and the teacher knows it.

    Paul Gilligan’s Pooch Cafe for the 28th of September uses the monkeys-on-keyboards concept. It’s shifted here to cats on a keyboard, but the principle is the same. Give a random process enough time and you can expect it to produce anything you want. It’s a matter of how long you can wait, though. And all the complications of how to make something that’s random. Cats won’t do it.
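    A sketch of the keyboard experiment, scaled down to something that actually finishes: how long before random typing produces ‘cat’? (The seed and the million-try cap are my own choices, there for reproducibility.)

    ```python
    import random

    random.seed(2015)
    letters = 'abcdefghijklmnopqrstuvwxyz'

    # Expected number of tries is 26**3 = 17,576; a cap keeps the loop finite.
    found_at = None
    for tries in range(1, 1_000_000):
        if ''.join(random.choice(letters) for _ in range(3)) == 'cat':
            found_at = tries
            break
    print(found_at)
    ```

    Three letters is quick. Each added letter multiplies the expected wait by 26, which is why the complete works of Shakespeare stay safely out of reach of any actual cat.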

    Mel Henze’s Gentle Creatures for the 29th of September is a rerun. I’m not sure when it was first printed. But it does use “ability to do mathematics” as a shorthand for “is intelligent at all”. That’s flattering to put in front of a mathematician, but I don’t think that’s really fair.

    Paul Trap’s Thatababy for the 30th of September is a protest about using mathematics in real life. I’m surprised Thatababy’s Dad had an algebra teacher proclaiming differential equations would be used. Usually teachers assert that whatever they’re teaching will be useful, which is how we provide motivation.

  • Joseph Nebus 3:21 pm on Monday, 22 June, 2015 Permalink | Reply
    Tags: , , , Cantor sets, , infinity, measure, , spackle   

    A Summer 2015 Mathematics A To Z: measure 


    Before painting a room you should spackle the walls. This fills up small holes and cracks. My father is notorious for using enough spackle to appreciably diminish the room’s volume. (So says my mother. My father disagrees.) I put spackle on as if I were paying for it myself, using so little my father has sometimes asked when I’m going to put any on. I’ll get to mathematics in the next paragraph.

    One of the natural things to wonder about a set — a collection of things — is how big it is. The “measure” of a set is how we describe how big a set is. If we’re looking at a set that’s a line segment within a longer line, the measure pretty much matches our idea of length. If we’re looking at a shape on the plane, the measure matches our idea of area. A solid in space we expect has a measure that’s like the volume.

    We might say the cracks and holes in a wall are as big as the amount of spackle it takes to fill them. Specifically, we mean it’s the least bit of spackle needed to fill them. And similarly we describe the measure of a set in terms of how much it takes to cover it. We even call this “covering”.

    We use the tool of “cover sets”. These are sets with a measure — a length, a volume, a hypervolume, whatever — that we know. If we look at regular old normal space, these cover sets are typically circles or spheres or similar nice, round sets. They’re familiar. They’re easy to work with. We don’t have to worry about how to orient them, the way we might if we had square or triangular covering sets. These covering sets can be as small or as large as you need. And we suppose that we have some standard reference. This is a covering set with measure 1, this with measure 1/2, this with measure 24, this with measure 1/72.04, and so on. (If you want to know what units these measures are in, they’re “units of measure”. What we’re interested in is unchanged whether we measure in “inches” or “square kilometers” or “cubic parsecs” or something else. It’s just longer to say.)

    You can imagine this as a game. I give you a set; you try to cover it. You can cover it with circles (or spheres, or whatever fits the space we’re in) that are big, or small, or whatever size you like. You can use as many as you like. You can cover more than just the things in the set I gave you. The only absolute rule is you must not miss anything, even one point, in the set I give you. Find the smallest total area of the covering circles you use. That smallest total area that covers the whole set is the measure of that set.

    Generally, measure matches pretty well the intuitive feel we might have for length or area or volume. And the idea extends to things that don’t really have areas. For example, we can study the probability of events by thinking of the space of all possible outcomes of an experiment, like all the ways twenty coins might come up. We find the measure of the set of outcomes we’re interested in, like all the sets that have ten tails. The probability of the outcome we’re interested in is the measure of the set we’re interested in divided by the measure of the set of all possible outcomes. (There’s more work to do to make this quite true. In an advanced probability course we do this work. Please trust me that we could do it if we had to. Also you see why we stride briskly past the discussion of units. What unit would make sense for measuring “the space of all possible outcomes of an experiment” anyway?)
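    The twenty-coin example is small enough to compute outright. A sketch:

    ```python
    from math import comb

    # "Measure" of the set of outcomes with exactly ten tails, divided by
    # the measure of the set of all 2**20 possible outcomes.
    favorable = comb(20, 10)      # 184,756 outcomes with ten tails
    total = 2 ** 20               # 1,048,576 outcomes in all
    print(favorable / total)      # a bit over 17 percent
    ```

    Counting finite sets is the easy case; the machinery of measure is what lets the same division-of-measures idea work when the outcomes form a continuum.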

    But there are surprises. For example, there’s the Cantor set. The easiest way to make the Cantor set is to start with a line of length 1 — of measure 1 — and take out the middle third. This produces two line segments of length, measure, 1/3 each. Take out the middle third of each of those segments. This leaves four segments each of length 1/9. Take out the middle third of each of those four segments, producing eight segments, and so on. If you do this infinitely many times you’ll create a set that has measure zero; it fills no volume, it has no length. And yet you can prove there are just as many points in this set as there are in the whole real line. Somehow merely having a lot of points doesn’t mean they fill space.
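    You can watch the measure drain away stage by stage. A sketch of the construction:

    ```python
    def cantor_stage(segments):
        """Remove the open middle third of each segment, keeping the outer thirds."""
        out = []
        for a, b in segments:
            third = (b - a) / 3
            out.append((a, a + third))
            out.append((b - third, b))
        return out

    segments = [(0.0, 1.0)]
    for n in range(5):
        segments = cantor_stage(segments)
        total = sum(b - a for a, b in segments)
        print(n + 1, len(segments), total)
    ```

    After stage n the total length left is (2/3) to the n-th power, which dwindles toward zero; the points that survive every stage are the Cantor set.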

    Measure is useful not just because it can give us paradoxes like that. We often want to say how big sets, or subsets, of whatever we’re interested in are. And using measure lets us adapt things like calculus to become more powerful. We’re able to say what the integral is for functions that are much more discontinuous, more chopped up, than ones that high school or freshman calculus can treat, for example. The idea of measure takes length and area and such and makes it more abstract, giving it great power and applicability.

    • sarcasticgoat 3:31 pm on Monday, 22 June, 2015 Permalink | Reply

      Living with black mold on walls, thoughts?


      • Joseph Nebus 4:41 pm on Monday, 22 June, 2015 Permalink | Reply

        See if you can get any rent from it. Unfortunately the mold may be aware the only way to get rid of it is to burn the house down and move to another time zone, so it tends to make lowball offers. Make sure you have a flame source and an accelerant in view while negotiating.

        Liked by 1 person

    • howardat58 7:14 pm on Monday, 22 June, 2015 Permalink | Reply

      I love the Cantor middle third set. It really is mind blowing on first encounter.


      • John Friedrich 12:29 am on Tuesday, 23 June, 2015 Permalink | Reply

        Of further interest, the Hausdorff dimension of the Cantor set is ln2/ln3, which proves that it is a fractal.


        • Joseph Nebus 4:56 am on Tuesday, 23 June, 2015 Permalink | Reply

          Yeah, that’s a neat trait. I might get around to dimensions if I do another a-to-z run, or maybe as an independent discussion.


      • Joseph Nebus 4:55 am on Tuesday, 23 June, 2015 Permalink | Reply

        It is mind-blowing, and it’s one of those sets that just keeps giving strangeness.


        • Aquileana 2:08 pm on Wednesday, 24 June, 2015 Permalink | Reply

          5,280 is such an interesting number!… I appreciate that you share all about its twists and meanings with us… Also congratulations on your stats on Twitter, Joseph. All my best wishes. Aquileana :D


  • Joseph Nebus 4:08 pm on Tuesday, 7 April, 2015 Permalink | Reply
    Tags: , , cryptography, ENIAC, infinity, peace, , ,   

    Reading the Comics, April 6, 2015: Little Infinite Edition 

    As I warned, there were a lot of mathematically-themed comic strips the last week, and here I can at least get us through the start of April. This doesn’t include the strips that ran today, the 7th of April by my calendar, because I have to get some serious-looking men to look at my car and I just know they’re going to disapprove of what my CV joint covers look like, even though I’ve done nothing to them. But I won’t be reading most of today’s comic strips until after that’s done, and so commenting on them later.

    Mark Anderson’s Andertoons (April 3) makes its traditional appearance in my roundup, in this case with a business-type guy declaring infinity to be “the loophole of all loopholes!” I think that’s overstating things a fair bit, but strange and very counter-intuitive things do happen when you try to work out a problem in which infinities turn up. For example: in ordinary arithmetic, the order in which you add together a bunch of real numbers makes no difference. If you want to add together infinitely many real numbers, though, it is possible to have them add to different numbers depending on what order you add them in. Most unsettlingly, it’s possible to have infinitely many real numbers add up to literally any real number you like, depending on the order in which you add them. And then things get really weird.
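    The rearrangement trick even lends itself to a little program. The alternating harmonic series 1 - 1/2 + 1/3 - 1/4 + … adds up to the natural logarithm of 2, about 0.693; greedily reordering its terms steers the partial sums anywhere you like. A sketch of my own, aiming at 1.5:

    ```python
    target = 1.5
    positives = (1 / n for n in range(1, 10**7, 2))     # 1, 1/3, 1/5, ...
    negatives = (-1 / n for n in range(2, 10**7, 2))    # -1/2, -1/4, ...

    total, terms = 0.0, 0
    while terms < 10_000:
        if total <= target:
            total += next(positives)   # below target: spend a positive term
        else:
            total += next(negatives)   # above target: spend a negative term
        terms += 1

    print(total)   # hovers within a hair of 1.5
    ```

    Because the positive and negative terms each diverge on their own while shrinking to zero, this greedy scheme can overshoot and undershoot any target forever, and the overshoots get as small as you like. Change `target` and the same few lines chase any real number at all.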

    Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips (April 3) is the other strip in this roundup to at least name-drop infinity. I confess I don’t see how “being infinite” would help in bringing about world peace, but I suppose being finite hasn’t managed the trick just yet so we might want to think outside the box.

    (More …)

    • ivasallay 4:39 pm on Tuesday, 7 April, 2015 Permalink | Reply

      Andertoons and Birdbrains were the best for me.


    • abyssbrain 3:57 am on Wednesday, 8 April, 2015 Permalink | Reply

      Hilbert’s infinite hotel rooms paradox can also show how weird the concept of infinity can get.


      • Joseph Nebus 10:08 pm on Wednesday, 8 April, 2015 Permalink | Reply

        They do, yes. They also suggest to me why the mathematics of infinity draws in a lot of … well, they’re generally called cranks, but maybe the less judgmental way to put it is non-standard mathematicians. The subject is astoundingly accessible; you can understand an interesting problem without any background. But the results are counter-intuitive, and so reasoning carefully is required, and it takes time and practice to do all the careful reasoning involved and to understand why the intuitive answers break down.

        Liked by 1 person
