Tagged: logic

  • Joseph Nebus 6:00 pm on Sunday, 8 January, 2017
    Tags: Birdbrains, Boomerangs, Elderberries, Grand Avenue, logic, Pot Shots, Quincy

    Reading the Comics, January 7, 2017: Just Before GoComics Breaks Everything Edition


    Most of the comics I review here are printed on GoComics.com. Well, most of the comics I read online are from there. But even so, I think they have more comic strips that mention mathematical themes than anywhere else. Anyway, they’re unleashing a complete web site redesign on Monday. I don’t know just what the final version will look like. I know that the beta versions included the incredibly useful, that is to say dumb, feature where if a particular comic you do read doesn’t have an update for the day — and many of them don’t, as they’re weekly or three-times-a-week or so — then it’ll show some other comic in its place. I mean, the idea of encouraging people to find new comics is a good one. To some extent that’s what I do here. But the beta made no distinction between “comic you don’t read because you never heard of Microcosm” and “comic you don’t read because glancing at it makes your eyes bleed”. And on an idiosyncratic note, I read a lot of comics. I don’t need to see Dude and Dude reruns in fourteen spots on my daily comics page, even if I didn’t mind the strip to start with.

    Anyway. I am hoping, desperately hoping, that with the new site all my old links to comics are going to keep working. If they don’t then I suppose I’m just ruined. We’ll see. My suggestion is that if you’re at all curious about any of these comics you read them today (Sunday), just to be safe.

    Ashleigh Brilliant’s Pot-Shots is a curious little strip I never knew of until GoComics picked it up a few years ago. Its format is compellingly simple: a little illustration alongside a wry, often despairing, caption. I love it, but I also understand why it was the subject of endless queries to the Detroit Free Press (Or Whatever) about why this thing was taking up newspaper space. The strip rerun the 31st of December is a typical example of the strip and amuses me at least. And it uses arithmetic as the way to communicate reasoning, both good and bad. Brilliant’s joke does address something that logicians have to face, too. Whether an argument is logically valid depends entirely on its structure. If the form is correct the reasoning may be excellent. But to be sound an argument must be valid and must also have true assumptions. We can separate whether an argument is right from whether it could ever possibly be right. If you don’t see the value in that, you have never participated in an online debate about where James T Kirk was born and whether Spock was the first Vulcan in Star Fleet.

    Thom Bluemel’s Birdbrains for the 2nd of January, 2017, is a loaded-dice joke. Is this truly mathematics? Statistics, at least? Close enough for the start of the year, I suppose. Working out whether a die is loaded is one of the things any gambler would like to know, and that mathematicians might be called upon to identify or exploit. (I had a grandmother unshakably convinced that I would have some natural ability to beat the Atlantic City casinos if she could only sneak the underaged me in. I doubt I could do anything of value there besides see the stage magic show.)
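
    Working out whether a die is loaded is also a nice worked example of statistics in action. Here’s a minimal sketch in Python of the standard chi-squared test; the rolls are hypothetical, and the critical value comes from a chi-squared table, not from anything in the strip.

    from collections import Counter

    # Hypothetical rolls of a die we suspect favors six.
    rolls = [1, 6, 6, 3, 6, 2, 6, 5, 6, 6, 4, 6]
    counts = Counter(rolls)
    expected = len(rolls) / 6            # rolls a fair die would give each face
    chi2 = sum((counts.get(face, 0) - expected) ** 2 / expected
               for face in range(1, 7))
    # With 5 degrees of freedom, a chi-squared value above about 11.07 lets us
    # reject "the die is fair" at the 5 percent significance level.
    print(chi2)                          # 15.0 for these rolls: looks loaded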

    Jack Pullan’s Boomerangs rerun for the 2nd is built on the one bit of statistical mechanics that everybody knows, that something or other about entropy always increasing. It’s not a quantum mechanics rule, but it’s a natural confusion. Quantum mechanics has the reputation as the source of all the most solid, irrefutable laws of the universe’s working. Statistical mechanics and thermodynamics have this musty odor of 19th-century steam engines, no matter how much there is to learn from there. Anyway, the collapse of systems into disorder is not an irrevocable thing. It takes only energy or luck to overcome disorderliness. And in many cases we can substitute time for luck.

    Scott Hilburn’s The Argyle Sweater for the 3rd is the anthropomorphic-geometry-figure joke that I’ve been waiting for. I had thought Hilburn did this all the time, although a quick review of Reading the Comics posts suggests he’s been more about anthropomorphic numerals the past year. This is why I log even the boring strips: you never know when I’ll need to check the last time Scott Hilburn used “acute” to mean “cute” in reference to triangles.

    Mike Thompson’s Grand Avenue uses some arithmetic as the visual cue for “any old kind of schoolwork, really”. Steve Breen’s name seems to have gone entirely from the comic strip. On Usenet group rec.arts.comics.strips Brian Henke found that Breen’s name hasn’t actually been on the comic strip since May, and D D Degg found a July 2014 interview indicating Thompson had mostly taken the strip over from originator Breen.

    Mark Anderson’s Andertoons for the 5th is another name-drop that doesn’t have any real mathematics content. But come on, we’re talking Andertoons here. If I skipped it the world might end or something untoward like that.

    'Now for my math homework. I've got a comfortable chair, a good light, plenty of paper, a sharp pencil, a new eraser, and a terrific urge to go out and play some ball.'

    Ted Shearer’s Quincy for the 14th of November, 1977, and reprinted the 7th of January, 2017. I kind of remember having a lamp like that. I don’t remember ever sitting down to do my mathematics homework with a paintbrush.

    Ted Shearer’s Quincy for the 14th of November, 1977, doesn’t have any mathematical content really. Just a mention. But I need some kind of visual appeal for this essay and Shearer is usually good for that.

    Corey Pandolph, Phil Frank, and Joe Troise’s The Elderberries rerun for the 7th is also a very marginal mention. But, what the heck, it’s got some of your standard wordplay about angles and it’ll get this week’s essay that much closer to 800 words.

     
  • Joseph Nebus 6:00 pm on Saturday, 31 December, 2016
    Tags: 19th Century, Axiom of Choice, continuum hypothesis, Crisis of Foundations, logic, ZFC

    The End 2016 Mathematics A To Z: Zermelo-Fraenkel Axioms 


    gaurish gave me a choice for the Z-term to finish off the End 2016 A To Z. I appreciate it. I’m picking the more abstract thing because I’m not sure that I can explain zero briefly. The foundations of mathematics are a lot easier.

    Zermelo-Fraenkel Axioms

    I remember the look on my father’s face when I asked if he’d tell me what he knew about sets. He misheard what I was asking about. When we had that straightened out my father admitted that he didn’t know anything particular. I thanked him and went off disappointed. In hindsight, I kind of understand why everyone treated me like that in middle school.

    My father’s always quick to dismiss how much mathematics he knows, or could understand. It’s a common habit. But in this case he was probably right. I knew a bit about set theory as a kid because I came to mathematics late in the “New Math” wave. Sets were seen as fundamental to why mathematics worked without being so exotic that kids couldn’t understand them. Perhaps so; both my love and I delighted in what we got of set theory as kids. But if you grew up before that stuff was popular you probably had a vague, intuitive, and imprecise idea of what sets were. Mathematicians had only a vague, intuitive, and imprecise idea of what sets were through to the late 19th century.

    And then came what mathematics majors hear of as the Crisis of Foundations. (Or a similar name, like Foundational Crisis. I suspect there are dialect differences here.) It reflected mathematics taking seriously one of its ideals: that everything in it could be deduced from clearly stated axioms and definitions using logically rigorous arguments. As often happens, taking one’s ideals seriously produces great turmoil and strife.

    Before about 1900 we could get away with saying that a set was a bunch of things which all satisfied some description. That’s how I would describe it to a new acquaintance if I didn’t want to be treated like I was in middle school. The definition is fine if we don’t look at it too hard. “The set of all roots of this polynomial”. “The set of all rectangles with area 2”. “The set of all animals with four-fingered front paws”. “The set of all houses in Central New Jersey that are yellow”. That’s all fine.

    And then if we try to be logically rigorous we get problems. We always did, though. They’re embodied by ancient jokes like the person from Crete who declared that all Cretans always lie; is the statement true? Or the slightly less ancient joke about the barber who shaves only the men who do not shave themselves; does he shave himself? If not jokes these should at least be puzzles faced in fairy-tale quests. Logicians dressed this up some. Bertrand Russell gave us the quite respectable “The set consisting of all sets which are not members of themselves”, and asked us to stare hard into that set. To this we have only one logical response, which is to shout, “Look at that big, distracting thing!” and run away. This satisfies the problem only for a while.
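
    If you’d like to stare at the problem in symbols, Russell’s set fits in one line. This is my transcription of the standard statement, not Russell’s own notation:

    R = \left\{ x : x \notin x \right\} \quad\Rightarrow\quad R \in R \leftrightarrow R \notin R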

    The while ended in — well, that took a while too. But between 1908 and the early 1920s Ernst Zermelo, Abraham Fraenkel, and Thoralf Skolem paused from arguing whose name would also be the best indie rock band name long enough to put set theory right. Their structure is known as Zermelo-Fraenkel Set Theory, or ZF. It gives us a reliable base for set theory that avoids any contradictions or catastrophic pitfalls. Or it does so far as we have found in a century of work.

    It’s built on a set of axioms, of course. Most of them are uncontroversial, things like declaring two sets are equivalent if they have the same elements. Declaring that the union of sets is itself a set. Obvious, sure, but it’s the obvious things that we have to make into axioms. Maybe you could start an argument about whether we should just assume there exists some infinitely large set. But if we’re aware sets probably have something to teach us about numbers, and that numbers can get infinitely large, then it seems fair to suppose that there must be some infinitely large set. The axioms that aren’t simple obvious things like that are too useful to do without. They assert stuff like: no set is an element of itself. Or that every set has a “power set”, a new set comprised of all the subsets of the original set. Good stuff to know.
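
    For a taste of what the axioms look like written out, here are two of the ones just mentioned, in my transcription of the standard notation. The first says sets with the same elements are the same set; the second says every set has a power set:

    \forall x \, \forall y \, \left[ \forall z \left( z \in x \leftrightarrow z \in y \right) \rightarrow x = y \right]

    \forall x \, \exists y \, \forall z \, \left[ z \in y \leftrightarrow z \subseteq x \right]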

    There is one axiom that’s controversial. Not controversial the way Euclid’s Parallel Postulate was. That’s the ugly one about lines crossing another line meeting on the same side they make angles smaller than something something or other. That axiom was controversial because it read so weird, so needlessly complicated. (It isn’t; it’s exactly as complicated as it must be. Or better, it’s as simple as it could possibly be and still be useful.) The controversial axiom of Zermelo-Fraenkel Set Theory is known as the Axiom of Choice. It says if we have a collection of mutually disjoint sets, each with at least one thing in them, then it’s possible to pick exactly one item from each of the sets.
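
    In symbols, one common way to put the Axiom of Choice says that for any collection of nonempty sets S_i, indexed by a set I, there’s a choice function f plucking one element from each. My transcription again:

    \exists f : I \to \bigcup_{i \in I} S_i \quad\text{ such that }\quad f(i) \in S_i \text{ for every } i \in I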

    It’s impossible to dispute this is what we have axioms for. It’s about something that feels like it should be obvious: we can always pick something from a set. How could this not be true?

    If it is true, though, we get some unsavory conclusions. For example, it becomes possible to take a ball the size of an orange and slice it up. We slice using mathematical blades. They’re not halted by something as petty as the desire not to slice atoms down the middle. We can reassemble the pieces. Into two balls. And worse, it doesn’t require we do something like cut the orange into infinitely many pieces. We expect crazy things to happen when we let infinities get involved. No, though, we can do this cut-and-duplicate thing by cutting the orange into five pieces. When you hear that it’s hard to know whether to point to the big, distracting thing and run away. If we dump the Axiom of Choice we don’t have that problem. But can we do anything useful without the ability to make a choice like that?

    And we’ve learned that we can. If we want to use Zermelo-Fraenkel Set Theory with the Axiom of Choice we say we’re working in “ZFC”, Zermelo-Fraenkel-with-Choice. We don’t have to. If we don’t want to make any assumption about choices we say we’re working in “ZF”. Which to use depends on what one wants to do.

    Either way Zermelo and Fraenkel and Skolem established set theory on the foundation we use to this day. We’re not required to use them, no; there’s a construction called von Neumann-Bernays-Gödel Set Theory that’s supposed to be more elegant. They didn’t mention it in my logic classes that I remember, though.

    And still there’s important stuff we would like to know which even ZFC can’t answer. The most famous of these is the continuum hypothesis. Everyone knows — excuse me. That’s wrong. Everyone who would be reading a pop mathematics blog knows there are different-sized infinitely-large sets. And knows that the set of integers is smaller than the set of real numbers. The question is: is there a set bigger than the integers yet smaller than the real numbers? The Continuum Hypothesis says there is not.
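
    In symbols, the Continuum Hypothesis says there is no set S with

    \aleph_0 < |S| < 2^{\aleph_0}

    where \aleph_0 is the size of the set of integers and 2^{\aleph_0} is the size of the set of real numbers.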

    Zermelo-Fraenkel Set Theory, even though it’s all about the properties of sets, can’t tell us if the Continuum Hypothesis is true. But that’s all right; it can’t tell us if it’s false, either. Whether the Continuum Hypothesis is true or false stands independent of the rest of the theory. We can assume whichever state is more useful for our work.

    Back to the ideals of mathematics. One question that produced the Crisis of Foundations was consistency. How do we know our axioms don’t contain a contradiction? It’s hard to say. Typically a set of axioms we can prove consistent is also a set too boring to do anything useful in. Zermelo-Fraenkel Set Theory, with or without the Axiom of Choice, has a lot of interesting results. Do we know the axioms are consistent?

    No, not yet. We know some of the axioms are mutually consistent, at least. And we have some results which, if true, would prove the axioms to be consistent. We don’t know if they’re true. Mathematicians are generally confident that these axioms are consistent. Mostly on the grounds that if there were a problem something would have turned up by now. It’s withstood all the obvious faults. But the universe is vaster than we imagine. We could be wrong.

    It’s hard to live up to our ideals. After a generation of valiant struggling we settle into hoping we’re doing good enough. And waiting for some brilliant mind that can get us a bit closer to what we ought to be.

     
    • elkement (Elke Stangl) 10:42 am on Sunday, 1 January, 2017

      Very interesting – as usual! I was also subjected to the New Math in elementary school – the upside was that you got a lot of nice toys for free, as ‘add-ons’ to school books ( … plastic cubes and other toy blocks that should represent members of sets …). Not sure if it prepared one better to understand Russell’s paradox later ;-)


      • elkement (Elke Stangl) 10:43 am on Sunday, 1 January, 2017

        … and I wish you a Happy New Year and more A-Zs in 2017 :-)


        • Joseph Nebus 5:34 am on Thursday, 5 January, 2017

          Thanks kindly. I am going to do a fresh A-to-Z, although I don’t know just when. Not in January; haven’t got the energy for it right away.


      • Joseph Nebus 5:34 am on Thursday, 5 January, 2017

        Oh, now, the toys were fantastic. I suppose it’s a fair question whether the people who got something out of the New Math got it because they understood fundamentals better in that form or whether it was just that the toys and games made the subject more engaging.

        I am, I admit, a fan of the New Math, but that may just be because it’s the way I learned mathematics, and the way you did something as a kid is always the one natural way to do it.


  • Joseph Nebus 6:00 pm on Monday, 7 November, 2016
    Tags: logic

    The End 2016 Mathematics A To Z: Cantor’s Middle Third 


    Today’s term is a request, the first of this series. It comes from HowardAt58, head of the Saving School Math blog. There are many letters not yet claimed; if you have a term you’d like to see me write about please head over to the “Any Requests?” page and pick a letter. Please not one I figure to get to in the next day or two.

    Cantor’s Middle Third.

    I think one could make a defensible history of mathematics by describing it as a series of ridiculous things that get discovered. And then, by thinking about these ridiculous things long enough, mathematicians come to accept them. Even rely on them. Sometime later the public even comes to accept them. I don’t mean to say getting people to accept ridiculous things is the point of mathematics. But there is a pattern which happens.

    Consider. People doing mathematics came to see how a number could be detached from a count or a measure of things. That we can do work on, say, “three” whether it’s three people, three kilograms, or three square meters. We’re so used to this it’s only when we try teaching mathematics to the young we realize it isn’t obvious.

    Or consider that we can have, rather than a whole number of things, a fraction. Some part of a thing, as if you could have one-half pieces of chalk or two-thirds a fruit. Counting is relatively obvious; fractions are something novel but important.

    We have “zero”; somehow, the lack of something is still a number, the way two or five or one-half might be. For that matter, “one” is a number. How can something that isn’t numerous be a number? We’re used to it anyway. We can have not just fractions and one and zero but irrational numbers, ones that can’t be represented as a fraction. We have negative numbers, somehow a lack of whatever we were counting so great that we might add some of what we were counting to the pile and still have nothing.

    That takes us up to about eight hundred years ago or something like that. The public’s gotten to accept all this as recently as maybe three hundred years ago. They’ve still got doubts. I don’t blame folks. Complex numbers mathematicians like; the public’s still getting used to the idea, but at least they’ve heard of them.

    Cantor’s Middle Third is part of the current edge. It’s something mathematicians are aware of and that defies sense, at least at first. But we’ve come to accept it. The public, well, they don’t know about it. Maybe some do; it turns up in pop mathematics books that like sharing the strangeness of infinities. Few people read them. Sometimes it feels like all those who do go online to tell mathematicians they’re crazy. It comes to us, as you might guess from the name, from Georg Cantor. Cantor established the modern mathematical concept of how to study infinitely large sets in the late 19th century. And he was repeatedly hospitalized for depression. It’s cruel to write all that off as “and he was crazy”. His work’s withstood a hundred and thirty-five years of extremely smart people looking at it skeptically.

    The Middle Third starts out easily enough. Take a line segment. Then chop it into three equal pieces and throw away the middle third. You see where the name comes from. What do you have left? Some of the original line. Two-thirds of the original line length. A big gap in the middle.

    Now take the two line segments. Chop each of them into three equal pieces. Throw away the middle thirds of the two pieces. Now we’re left with four chunks of line and four-ninths of the original length. One big and two little gaps in the middle.

    Now take the four little line segments. Chop each of them into three equal pieces. Throw away the middle thirds of the four pieces. We’re left with eight chunks of line, about eight-twenty-sevenths of the original length. Lots of little gaps. Keep doing this, chopping up line segments and throwing away middle pieces. Never stop. Well, pretend you never stop and imagine what’s left.

    What’s left is deeply weird. What’s left has no length, no measure. That’s easy enough to prove. But we haven’t thrown everything away. There are bits of the original line segment left over. The left endpoint of the original line is left behind. So is the right endpoint of the original line. The endpoints of the line segments after the first time we chopped out a third? Those are left behind. The endpoints of the line segments after chopping out a third the second time, the third time? Those have to be in the set. We have a dust, isolated little spots of the original line, none of them combining together to cover any length. And there are infinitely many of these isolated dots.
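
    For the one-line version of that easy proof: after the n-th round of chopping there are 2^n segments left, each of length (1/3)^n, so the total length remaining is

    \left(\frac{2}{3}\right)^n \longrightarrow 0 \quad\text{ as } n \to \infty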

    We’ve seen that before. At least we have if we’ve read anything about the Cantor Diagonal Argument. You can find that among the first ten posts of every mathematics blog. (Not this one. I was saving the subject until I had something good to say about it. Then I realized many bloggers have covered it better than I could.) Part of it is pondering how there can be a set of infinitely many things that don’t cover any length. The whole numbers are such a set and it seems reasonable they don’t cover any length. The rational numbers, though, are also an infinitely-large set that doesn’t cover any length. And there’s exactly as many rational numbers as there are whole numbers. This is unsettling but if you’re the sort of person who reads about infinities you come to accept it. Or you get into arguments with mathematicians online and never know you’ve lost.

    Here’s where things get weird. How many bits of dust are there in this middle third set? It seems like it should be countable, the same size as the whole numbers. After all, we pick up some of these points every time we throw away a middle third. So we double the number of points left behind every time we throw away a middle third. That’s countable, right?

    It’s not. We can prove it. The proof looks uncannily like that of the Cantor Diagonal Argument. That’s the one that proves there are more real numbers than there are whole numbers. There are points in this leftover set that were not endpoints of any of the middle-third pieces we cut out. This dust has more points in it than there are rational numbers, but it covers no length.

    (The dust does have the same size as the set of real numbers, as it happens. Every point of the set matches a ternary expansion that uses only the digits 0 and 2; swap each 2 for a 1 and you have the binary expansion of a real number between 0 and 1, and vice-versa.)

    It’s got other neat properties. It’s a fractal, which is why someone might have heard of it, back in the Great Fractal Land Rush of the 80s and 90s. Look closely at part of this set and it looks like the original set, with bits of dust edging gaps of bigger and smaller sizes. It’s got a fractal dimension, or “Hausdorff dimension” in the lingo, that’s the logarithm of two divided by the logarithm of three. That’s a number actually known to be transcendental, which is reassuring. Nearly all numbers are transcendental, but we only know a few examples of them.
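
    Written out, that dimension is

    \dim_H = \frac{\log 2}{\log 3} \approx 0.6309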

    HowardAt58 asked me about the Middle Third set, and that’s how I’ve referred to it here. It’s more often called the “Cantor set” or “Cantor comb”. The “comb” makes sense because if you draw successive middle-thirds-thrown-away, one after the other, you get something that looks kind of like a hair comb, if you squint.

    You can build sets like this that aren’t based around thirds. You can, for example, develop one by cutting lines into five chunks and throwing away the second and fourth. You get results that are similar, and similarly heady, but different. They’re all astounding. They’re all hard to believe in yet. They may get to be stuff we just accept as part of how mathematics works.

     
  • Joseph Nebus 6:00 pm on Tuesday, 11 October, 2016
    Tags: logic

    Reading the Comics, October 8, 2016: Split Week Edition Part 2 


    And now I can finish off last week’s comics. It was a busy week. The first few days of this week have been pretty busy too. Meanwhile, Dave Kingsbury has recently read a biography of Lewis Carroll, and been inspired to form a haiku/tanka project. You might enjoy.

    Susan Camilleri Konar is a new cartoonist for the Six Chix collective. Her first strip to get mentioned around these parts is from the 5th. It’s a casual mention of the Fibonacci sequence, which is one of the few sequences that a normal audience would recognize as something going on forever. And yes, I noticed the spiral in the background. That’s one of the common visual representations of the Fibonacci sequence: it starts from the center. The rectangles inside have dimensions 1 by 2, then 2 by 3, then 3 by 5, then 5 by 8, and so on; the spiral connects vertices of these rectangles. It’s an attractive spiral and you can derive the overrated Golden Ratio from the dimensions of larger rectangles. This doesn’t make the Golden Ratio important or anything, but it is there.
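
    If you’d like to watch the Golden Ratio fall out of the sequence, a few lines of Python show the ratio of consecutive Fibonacci numbers converging to it. This is just the standard computation, nothing from the strip:

    # Ratios of consecutive Fibonacci numbers converge to the Golden Ratio.
    a, b = 1, 1
    for _ in range(30):
        a, b = b, a + b
    print(b / a)                  # about 1.618033988749895
    print((1 + 5 ** 0.5) / 2)     # the Golden Ratio itself, for comparison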

    'It seems like Fibonacci's been entering his password for days now.'

    Susan Camilleri Konar’s Six Chix for the 5th of October, 2016. And yet what distracts me is both how much food Fibonacci has on his desk and how much of it is hidden behind his computer where he can’t get at it. He’s going to end up spilling his coffee on something important fiddling around like that. And that’s not even getting at his computer being at this weird angle relative to the walls.

    Ryan North’s Dinosaur Comics for the 6th is part of a story about T-Rex looking for certain truth. Mathematics could hardly avoid coming up. And it does offer what look like universal truths: given the way deductive logic works, and some starting axioms, various things must follow. “1 + 1 = 2” is among them. But there are limits to how much that tells us. If we accept the rules of Monopoly, then owning four railroads means the rent for landing on one is a game-useful $200. But if nobody around you cares about Monopoly, so what? And so it is with mathematics. Utahraptor and Dromiceiomimus point out that the mathematics we know is built on premises we have selected because we find them interesting or useful. We can’t know that the mathematics we’ve deduced has any particular relevance to reality. Indeed, it’s worse than North points out: How do we know whether an argument is valid? Because we believe that its conclusions follow from its premises according to our rules of deduction. We rely on our possibly deceptive senses to tell us what the argument even was. We rely on a mind possibly upset by an undigested bit of beef, a crumb of cheese, or a fragment of an underdone potato to tell us the rules are satisfied. Mathematics seems to offer us absolute truths, but it’s hard to see how we can get there.

    Rick Stromoski’s Soup to Nutz for the 6th has a mathematics cameo in a student-resisting-class-questions problem. But the teacher’s question is related to the figure that made my first fame around these parts.

    Mark Anderson’s Andertoons for the 7th is the long-awaited Andertoon for last week. It is hard getting education in through all the overhead.

    Bill Watterson’s Calvin and Hobbes rerun for the 7th is a basic joke about Calvin’s lousy student work. Fun enough. Calvin does show off one of those important skills mathematicians learn, though. He does do a sanity check. He may not know what 12 + 7 and 3 + 4 are, but he does notice that 12 + 7 has to be something larger than 3 + 4. That’s a starting point. It’s often helpful before starting work on a problem to have some idea of what you think the answer should be.

     
    • davekingsbury 5:57 pm on Wednesday, 12 October, 2016

      Thank you for the mention. Good advice about starting work on a problem knowing roughly what the answer is … though my post demonstrated the opposite!


      • Joseph Nebus 3:43 am on Saturday, 15 October, 2016

        Quite welcome. And, well, usually having an idea what answer you expect helps. Sometimes it misfires, I admit. But all rules of thumb sometimes misfire. If your expectation misfires it’s probably because you expect the answer to be something that’s not just wrong, but wrong in a significant way. That is, not wrong because you’re thinking 12 when it should be 14, but rather wrong because you’re thinking 12 when you should be thinking of doughnut shapes. But figuring that out is another big learning experience.


  • Joseph Nebus 6:00 pm on Thursday, 7 July, 2016
    Tags: logic

    Theorem Thursday: The Jordan Curve Theorem 


    There are many theorems that you have to get fairly far into mathematics to even hear of. Often they involve things that are so abstract and abstruse that it’s hard to parse just what we’re studying. This week’s entry is not one of them.

    The Jordan Curve Theorem.

    There are a couple of ways to write this. I’m going to fall back on the version that Richard Courant and Herbert Robbins put in the great book What Is Mathematics?. It’s a theorem in the field of topology, the study of the properties shapes keep even when they’re stretched and squashed. In particular it’s about simple, closed curves on a plane. A curve is just what you figure it should be. It’s closed if it … uh … closes, makes a complete loop. It’s simple if it doesn’t cross itself or have any disconnected bits. So, something you could draw without lifting pencil from paper and without crossing back over yourself. Have all that? Good. Here’s the theorem:

    A simple closed curve in the plane divides that plane into exactly two domains, an inside and an outside.

    It’s named for Camille Jordan, a French mathematician who lived from 1838 to 1922, and who’s renowned for work in group theory and topology. It’s a different Jordan from the one named in Gauss-Jordan Elimination, which is a matrix thing that’s important but tedious. It’s also a different Jordan from Jordan Algebras, which I remember hearing about somewhere.

    The Jordan Curve Theorem is proved by reading its proposition and then saying, “Duh”. This is compelling, although it lacks rigor. It’s obvious if your curve is a circle, or a slightly squished circle, or a rectangle or something like that. It’s less obvious if your curve is a complicated labyrinth-type shape.

    A labyrinth drawn in straight and slightly looped lines.

    A generic complicated maze shape. Can you pick out which part is the inside and which the outside? Pretend you don’t notice that little peninsula thing in the upper right corner. I didn’t mean the line to overlap itself but I was using too thick a brush in ArtRage and didn’t notice before I’d exported the image.

    It gets downright hard if the curve has a lot of corners. This is why a completely satisfying rigorous proof took decades to find. There are curves that are nowhere differentiable, that are nothing but corners, and those are hard to deal with. If you think there’s no such thing, then remember the Koch Snowflake. That’s that triangle sticking up from the middle of a straight line, that itself has triangles sticking up in the middle of its straight lines, and littler triangles still sticking up from the straight lines. Carry that on forever and you have a shape that’s continuous but always changing direction, and this is hard to deal with.

    Still, you can have a good bit of fun drawing a complicated figure, then picking a point and trying to work out whether it’s inside or outside the curve. The challenging way to do that is to view your figure as a maze and look for a path leading outside. The easy way is to draw a new line. I recommend doing that in a different color.

    In particular, draw a line from your target point to the outside. Some definitely outside point. You need the line to not be parallel to any of the curve’s line segments. And it’s easier if you don’t happen to intersect any vertices, but if you must, we’ll deal with that two paragraphs down.

    A dot with a testing line that crosses the labyrinth curve six times, and therefore is outside the curve.

    A red dot that turns out to be outside the labyrinth, based on the number of times the testing line, in blue, crosses the curve. I learned doing this that I should have drawn the dot and blue line first and then fit a curve around it so I wouldn’t have to work so hard to find one lousy point and line segment that didn’t have some problems.

    So draw your testing line here from the point to something definitely outside. And count how many times your testing line crosses the original curve. If the testing line crosses the original curve an even number of times then the original point was outside the curve. If the testing line crosses the original curve an odd number of times then the original point was inside of the curve. Done.

    If your testing line touches a vertex, well, then it gets fussy. It depends whether the two edges of the curve that go into that vertex stay on the same side as your testing line. If the original curve’s edges stay on the same side of your testing line, then don’t count that as a crossing. If the edges go on opposite sides of the testing line, then that does count as one crossing. With that in mind, carry on like you did before. An even number of crossings means your point was outside. An odd number of crossings means your point was inside.
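
    If you’d rather let a computer do the counting, here’s a sketch in Python of the even-odd test for a curve made of line segments. The square at the bottom is a hypothetical stand-in for your labyrinth. The half-open comparison on the y-coordinates is a standard trick: it makes a vertex where the curve stays on one side of the ray count twice (changing nothing) and a vertex where the curve crosses count once, which is exactly the rule above.

    def inside(point, polygon):
        """Even-odd test: does `point` lie inside the closed polygonal curve?"""
        px, py = point
        crossings = 0
        n = len(polygon)
        for i in range(n):
            (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
            # Consider this edge only if a horizontal ray from the point can
            # cross it; the half-open test (<=) handles shared vertices.
            if (y1 <= py) != (y2 <= py):
                # x-coordinate where the edge meets the ray's height
                x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if x_cross > px:
                    crossings += 1
        return crossings % 2 == 1     # odd number of crossings: inside

    square = [(0, 0), (4, 0), (4, 4), (0, 4)]
    print(inside((2, 2), square))     # True: inside
    print(inside((5, 2), square))     # False: outside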

    The testing line touches a corner of the curve. The curve comes up to and goes away from the same side as the testing line.

    This? Doesn’t count as the blue testing line crossing the black curve.


    The testing line touches a corner of the curve. The curve crosses over, with legs on either side of the testing line at that point.

    This? This counts as the blue testing line crossing the black curve.

    So go ahead and do this a couple times with a few labyrinths and sample points. It’s fun and elevates your doodling to the heights of 19th-century mathematics. Also once you’ve done that a couple times you’ve proved the Jordan curve theorem.

    Well, no, not quite. But you are most of the way to proving it for a special case. If the curve is a polygon, a shape made up of a finite number of line segments, then you’ve got almost all the proof done. You have to finish it off by choosing a ray, a direction, that isn’t parallel to any of the polygon’s line segments. (This is one reason this method only works for polygons, and fails for stuff like the Koch Snowflake. It also doesn’t work well with space-filling curves, which are things that exist. Yes, those are what they sound like: lines that squiggle around so much they fill up area. Some can fill volume. I swear. It’s fractal stuff.) Imagine all the lines that are parallel to that ray. There’s definitely some point along that line that’s outside the curve. You’ll need that for reference. Classify all the points on that line by whether there’s an even or an odd number of crossings between a starting point and your reference definitely-outside point. Keep doing that for all these many parallel lines.

    And that’s it. The mess of points that have an odd number of intersections are the inside. The mess of points that have an even number of intersections are the outside.

    You won’t be surprised to know there’s versions of the Jordan curve theorem for solid objects in three-dimensional space. And for hyperdimensional spaces too. You can always work out an inside and an outside, as long as space isn’t being all weird. But it might sound like it’s not much of a theorem. So you can work out an inside and an outside; so what?

    But it’s one of those great utility theorems. It pops in to places, the perfect tool for a problem you were just starting to notice existed. If I can get my rhetoric organized I hope to show that off next week, when I figure to do the Five-Color Map Theorem.

     
    • howardat58 7:00 pm on Thursday, 7 July, 2016

      Richard Courant and Herbert Robbins: What Is Mathematics?.

      My bedside book, since 1961.


      • Joseph Nebus 4:10 am on Saturday, 9 July, 2016

        I’d first read it as an undergraduate and it was one of my first online book purchases. I do keep dipping into it and finding things I feel like I should write about here. But then I have to think of something to add to it. In my case, that’s jokes, mostly.


    • mathtuition88 4:45 am on Friday, 8 July, 2016

      Very interesting. Jordan Curve Theorem shows the rigor of math in action.


      • Joseph Nebus 4:15 am on Saturday, 9 July, 2016

        I like it for being the sort of theorem that seems too obvious to be useful. I have got it scheduled to be used in next Thursday’s post.


    • Mark Jackson 12:18 am on Sunday, 17 July, 2016

      “You won’t be surprised to know there’s versions of the Jordan curve theorem for solid objects in three-dimensional space.” Not that I ought to doubt this, but the counterintuitive discovery that the 3-sphere can be everted sprang to mind, and now I’m worried.


      • Joseph Nebus 4:42 pm on Wednesday, 20 July, 2016

        It’s a good worry and I’ll admit this is getting deeper into topology than I’m trained in. My suspicion is that the possible self-intersections of a sphere being turned inside-out cause it to fall outside the bounds of the Jordan-Brouwer Separation Theorem. I don’t have a good argument that has to be the case though; that’s just where I would start looking.


  • Joseph Nebus 3:00 pm on Thursday, 30 June, 2016
    Tags: factorials, logic

    Theorem Thursday: Liouville’s Approximation Theorem And How To Make Your Own Transcendental Number 


    As I get into the second month of Theorem Thursdays I have, I think, the whole roster of weeks sketched out. Today, I want to dive into some real analysis, and the study of numbers. It’s the sort of thing you normally get only if you’re willing to be a mathematics major. I’ll try to be readable by people who aren’t. If you carry through to the end and follow directions you’ll have your very own mathematical construct, too, so enjoy.

    Liouville’s Approximation Theorem

    It all comes back to polynomials. Of course it does. Polynomials aren’t literally everything in mathematics. They just come close. Among the things we can do with polynomials is divide up the real numbers into different sets. The tool we use is polynomials with integer coefficients. Integers are the positive and the negative whole numbers, stuff like ‘4’ and ‘5’ and ‘-12’ and ‘0’.

    A polynomial is the sum of a bunch of products of coefficients multiplied by a variable raised to a power. We can use anything for the variable’s name. So we use ‘x’. Sometimes ‘t’. If we want complex-valued polynomials we use ‘z’. Some people trying to make a point will use ‘y’ or ‘s’ but they’re just showing off. Coefficients are just numbers. If we know the numbers, great. If we don’t know the numbers, or we want to write something that doesn’t commit us to any particular numbers, we use letters from the start of the alphabet. So we use ‘a’, maybe ‘b’ if we must. If we need a lot of numbers, we use subscripts: a_0, a_1, a_2, and so on, up to some a_n for some big whole number n. To talk about one of these without committing ourselves to a specific example we use a subscript of i or j or k: a_j, a_k. It’s possible that a_j and a_k equal each other, but they don’t have to, unless j and k are the same whole number. They might also be zero, but they don’t have to be. They can be any numbers. Or, for this essay, they can be any integers. So we’d write a generic polynomial f(x) as:

    f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_{n - 1}x^{n - 1} + a_n x^n

    (Some people put the coefficients in the other order, that is, a_n + a_{n - 1}x + a_{n - 2}x^2 and so on. That’s not wrong. The name we give a number doesn’t matter. But it makes it harder to remember what coefficient matches up with, say, x^{14}.)

    A zero, or root, is a value for the variable (‘x’, or ‘t’, or what have you) which makes the polynomial equal to zero. It’s possible that ‘0’ is a zero, but don’t count on it. A polynomial of degree n — meaning the highest power to which x is raised is n — can have up to n different real-valued roots. All we’re going to care about is one.

    Rational numbers are what we get by dividing one whole number by another. They’re numbers like 1/2 and 5/3 and 6. They’re numbers like -2.5 and 1.0625 and negative a billion. Almost none of the real numbers are rational numbers; they’re exceptional freaks. But they are all the numbers we actually compute with, once we start working out digits. Thus we remember that to live is to live paradoxically.

    And every rational number is a root of a first-degree polynomial. That is, there’s some polynomial f(x) = a_0 + a_1 x that your rational number makes equal to zero. It’s easy to tell you what it is, too. Pick your rational number. You can write that as the integer p divided by the integer q. Now look at the polynomial f(x) = p – q x. Astounded yet?
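
    Astounding or not, you can check it in one line:

    f\left(\frac{p}{q}\right) = p - q \cdot \frac{p}{q} = p - p = 0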

    That trick will work for any rational number. It won’t work for any irrational number. There’s no first-degree polynomial with integer coefficients that has the square root of two as a root. There are polynomials that do, though. There’s f(x) = 2 – x^2. You can find the square root of two as the zero of a second-degree polynomial. You can’t find it as the zero of any lower-degree polynomials. So we say that this is an algebraic number of the second degree.

    This goes on higher. Look at the cube root of 2. That’s another irrational number, so no first-degree polynomials have it as a root. And there’s no second-degree polynomials that have it as a root, not if we stick to integer coefficients. Ah, but f(x) = 2 – x^3? That’s got it. So the cube root of two is an algebraic number of degree three.

    We can go on like this, although I admit examples for higher-order algebraic numbers start getting hard to justify. Most of the numbers people have heard of are either rational or are order-two algebraic numbers. I can tell you truly that the eighth root of two is an eighth-degree algebraic number. But I bet you don’t feel enlightened. At best you feel like I’m setting up for something. The number r(5), the smallest radius a disc can have so that five of them will completely cover a disc of radius 1, is eighth-degree and that’s interesting. But you never imagined the number before and don’t have any idea how big that is, other than “I guess that has to be smaller than 1”. (It’s just a touch less than 0.61.) I sound like I’m wasting your time, although you might start doing little puzzles trying to make smaller coins cover larger ones. Do have fun.

    Liouville’s Approximation Theorem is about approximating algebraic numbers with rational ones. Almost everything we ever do is with rational numbers. That’s all right because we can make the difference between the number we want, even if it’s r(5), and the numbers we can compute with, rational numbers, as tiny as we need. We trust that the errors we make from this approximation will stay small. And then we discover chaos science. Nothing is perfect.

    For example, suppose we need to estimate π. Everyone knows we can approximate this with the rational number 22/7. That’s about 3.142857, which is all right but nothing great. Some people know we can approximate it as 333/106. (I didn’t until I started writing this paragraph and did some research.) That’s about 3.141509, which is better. Then there’s 355/113, which is not as famous as 22/7 but is a celebrity compared to 333/106. That’s about 3.1415929. Then we get into some numbers only mathematics hipsters know: 103993/33102 and 104348/33215 and so on. Fine.

    The Liouville Approximation Theorem is about sequences that converge on an irrational number. So we have our first approximation x_1, that’s the integer p_1 divided by the integer q_1. So, 22 and 7. Then there’s the next approximation x_2, that’s the integer p_2 divided by the integer q_2. So, 333 and 106. Then there’s the next approximation yet, x_3, that’s the integer p_3 divided by the integer q_3. As we look at more and more approximations, x_j’s, we get closer and closer to the actual irrational number we want, in this case π. Also, the denominators, the q_j’s, keep getting bigger.

    The theorem speaks of having an algebraic number, call it x, of some degree n greater than 1. Then we have this limit on how good an approximation can be. The difference between the number x that we want, and our best approximation p / q, has to be larger than the number (1/q)^{n + 1}. The approximation might be higher than x. It might be lower than x. But it will be off by at least the n-plus-first power of 1/q.
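
    Written as a formula, the way this essay states it, the bound on any of these approximations p/q is:

    \left| x - \frac{p}{q} \right| > \left(\frac{1}{q}\right)^{n + 1}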

    Polynomials let us separate the real numbers into infinitely many tiers of numbers. They also let us say how well the most accessible tier of numbers, rational numbers, can approximate these more exotic things.

    One of the things we learn by looking at numbers through this polynomial screen is that there are transcendental numbers. These are numbers that can’t be the root of any polynomial with integer coefficients. π is one of them. e is another. Nearly all numbers are transcendental. But the proof that any particular number is one is hard. Joseph Liouville showed that transcendental numbers must exist by using continued fractions. But this approximation theorem tells us how to make our own transcendental numbers. This won’t be any number you or anyone else has ever heard of, unless you pick a special case. But it will be yours.

    You will need:

    1. a_1, an integer from 1 to 9, such as ‘1’, ‘9’, or ‘5’.
    2. a_2, another integer from 1 to 9. It may be the same as a_1 if you like, but it doesn’t have to be.
    3. a_3, yet another integer from 1 to 9. It may be the same as a_1 or a_2 or, if it so happens, both.
    4. a_4, one more integer from 1 to 9 and you know what? Let’s summarize things a bit.
    5. A whopping great big gob of integers a_j, every one of them from 1 to 9, for every possible integer ‘j’ so technically this is infinitely many of them.
    6. Comfort with the notation n!, which is the factorial of n. For whole numbers that’s the product of every whole number from 1 to n, so, 2! is 1 times 2, or 2. 3! is 1 times 2 times 3, or 6. 4! is 1 times 2 times 3 times 4, or 24. And so on.
    7. Not to be thrown by me writing -n!. By that I mean work out n! and then multiply that by -1. So -2! is -2. -3! is -6. -4! is -24. And so on.

    Now, assemble them into your very own transcendental number z, by this formula:

    z = a_1 \cdot 10^{-1} + a_2 \cdot 10^{-2!} + a_3 \cdot 10^{-3!} + a_4 \cdot 10^{-4!} + a_5 \cdot 10^{-5!} + a_6 \cdot 10^{-6!} \cdots

    If you’ve done it right, this will look something like:

    z = 0.a_{1}a_{2}000a_{3}00000000000000000a_{4}0000000 \cdots
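
    If you’d rather have the computer place your digits, here’s a sketch in Python. The digits passed in are a hypothetical choice; substitute your own a_j’s:

    from math import factorial

    def my_transcendental(digits, places=30):
        """Place digit a_j at the (j!)-th decimal position, zeroes elsewhere."""
        out = ['0'] * places
        for j, a_j in enumerate(digits, start=1):
            if factorial(j) <= places:
                out[factorial(j) - 1] = str(a_j)
        return '0.' + ''.join(out)

    # All 1's gives the Liouville Constant mentioned below:
    print(my_transcendental([1, 1, 1, 1]))   # 0.110001000000000000000001000000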

    Ah, but, how do you know this is transcendental? We can prove it is. The proof is by contradiction, which is how a lot of great proofs are done. We show nonsense follows if the thing isn’t true, so the thing must be true. (There are mathematicians that don’t care for proof-by-contradiction. They insist on proof by charging straight ahead and showing a thing is true directly. That’s a matter of taste. I think every mathematician feels that way sometimes, to some extent or on some issues. The proof-by-contradiction is easier, at least in this case.)

    Suppose that your z here is not transcendental. Then it’s got to be an algebraic number of degree n, for some finite number n. That’s what it means not to be transcendental. I don’t know what n is; I don’t care. There is some n and that’s enough.

    Now, let’s let z_m be a rational number approximating z. We find this approximation by taking the first m! digits after the decimal point. So, z_1 would be just the number 0.a_1. z_2 is the number 0.a_1a_2. z_3 is the number 0.a_1a_2000a_3. I don’t know what m you like, but that’s all right. We’ll pick a nice big m.

    So what’s the difference between z and z_m? Well, it can’t be larger than 10 times 10^{-(m + 1)!}. This is for the same reason that π minus 3.14 can’t be any bigger than 0.01.

    Now suppose we have the best possible rational approximation, p/q, of your number z. Its first m! digits are going to be p / 10^{m!}. This will be z_m. And by the Liouville Approximation Theorem, then, the difference between z and z_m has to be at least as big as (1/10^{m!})^{n + 1}.

    So we know the difference between z and zm has to be larger than one number. And it has to be smaller than another. Let me write those out.

    \frac{1}{10^{m! (n + 1)}} < |z - z_m | < \frac{10}{10^{(m + 1)!}}

    We don’t need the z – z_m anymore. That thing on the rightmost side we can rewrite in a form that I’ll swear is a little easier to use. What we have left is:

    \frac{1}{10^{m! (n + 1)}} < \frac{1}{10^{(m + 1)! - 1}}

    And this can be true only when the number m! (n + 1) is greater than (m + 1)! – 1.

    But here’s the thing. Since (m + 1)! equals m! times (m + 1), that inequality fails for every m greater than n. So the difference between your alleged transcendental number and its best-possible rational approximation has to be simultaneously bigger than a number and smaller than that same number without being equal to it. Supposing your number is anything but transcendental produces nonsense. Therefore, congratulations! You have a transcendental number.

    If you chose all 1’s for your a_j’s, then you have what is sometimes called the Liouville Constant. If you didn’t, you may have a transcendental number nobody’s ever noticed before. You can name it after someone if you like. That’s as meaningful as naming a star for someone and cheaper. But you can style it as weaving someone’s name into the universal truth of mathematics. Enjoy!

    I’m glad to finally give you a mathematics essay that lets you make something you can keep.

     
    • Andrew Wearden 3:29 pm on Thursday, 30 June, 2016

      Admittedly, I do have an undergrad math degree, but I thought you did a good job explaining this. Out of curiosity, is there a reason you can’t use the integer ‘0’ when creating a transcendental number?


      • Joseph Nebus 6:45 am on Sunday, 3 July, 2016

        Thank you. I’m glad you followed.

        If I’m not missing a trick there’s no reason you can’t slip a couple of zeroes into the transcendental number. But there is a problem if you have nothing but zeroes after some point. If, say, everything from a_9 on were zero, then you’d have a rational number, which is as un-transcendental as it gets. So it’s easier to build a number without electing zeroes rather than work out a rule that allows zeroes only in non-dangerous configurations.


  • Joseph Nebus 3:00 pm on Tuesday, 14 June, 2016
    Tags: logic

    What’s The Shortest Proof I’ve Done? 


    I didn’t figure to have a bookend for last week’s “What’s The Longest Proof I’ve Done?” question. I don’t keep track of these things, after all. And the length of a proof must be a fluid concept. If I show something is a direct consequence of a previous theorem, is the proof’s length the two lines of new material? Or is it all the proof of the previous theorem plus two new lines?

    I would think the shortest proof I’d done was showing that the logarithm of 1 is zero. This would be starting from the definition of the natural logarithm of a number x as the definite integral of 1/t on the interval from 1 to x. But that requires a bunch of analysis to support the proof. And the Intermediate Value Theorem. Does that stuff count? Why or why not?
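
    Written out, the whole of that proof is

    \ln 1 = \int_1^1 \frac{1}{t} \, dt = 0

    since the interval of integration has no width at all. Everything long about it hides in the supporting analysis.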

    But this happened to cross my desk: The Shortest-Known Paper Published in a Serious Math Journal: Two Succinct Sentences, an essay by Dan Colman. It reprints a paper by L J Lander and T R Parkin which appeared in the Bulletin of the American Mathematical Society in 1966.

    It’s about Euler’s Sums of Powers Conjecture. This is a spinoff of Fermat’s Last Theorem. Leonhard Euler observed that you need at least two whole numbers so that their squares add up to a square. And you need three cubes of whole numbers to add up to the cube of a whole number. Euler speculated you needed four whole numbers so that their fourth powers add up to a fourth power, five whole numbers so that their fifth powers add up to a fifth power, and so on.

    And it’s not so. Lander and Parkin found that this conjecture is false. They did it the new old-fashioned way: they set a computer to test cases. And they found four whole numbers whose fifth powers add up to a fifth power. So the quite short paper answers a long-standing question, and would be hard to beat for accessibility.
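
    The counterexample they found is small enough to check in a line. Here it is in Python, using the numbers from their paper:

    # Four fifth powers adding up to a fifth power, against Euler's conjecture.
    assert 27**5 + 84**5 + 110**5 + 133**5 == 144**5
    print(27**5 + 84**5 + 110**5 + 133**5)   # 61917364224, which is 144**5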

    There is another famous short proof sometimes credited as the most wordless mathematical presentation. Frank Nelson Cole gave it on the 31st of October, 1903. It was about the Mersenne number 2^67 – 1, or in human notation, 147,573,952,589,676,412,927. It was already known the number wasn’t prime. (People wondered because numbers of the form 2^n – 1 often lead us to perfect numbers. And those are interesting.) But nobody knew what its factors were. Cole gave his talk by going up to the board, working out 2^67 – 1, and then moving to the other side of the board. There he wrote out 193,707,721 × 761,838,257,287, and showed what that was. Then, per legend, he sat down without ever saying a word, and took in the standing ovation.

    I don’t want to cast aspersions on a great story like that. But mathematics is full of great stories that aren’t quite so. And I notice that one of Cole’s doctoral students was Eric Temple Bell. Bell gave us a great many tales of mathematics history that are grand and great stories that just weren’t so. So I want it noted that I don’t know where we get this story from, or how it may have changed in the retellings. But Cole’s proof is correct, at least according to Octave.
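
    If you don’t have Octave handy, the check fits in a couple lines of Python:

    # Cole's factorization of the Mersenne number 2^67 - 1.
    p, q = 193_707_721, 761_838_257_287
    assert p * q == 2**67 - 1 == 147_573_952_589_676_412_927
    print(p, '×', q, '=', p * q)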

    So not every proof is too long to fit in the universe. But then I notice that Mathworld’s page regarding the Euler Sum of Powers Conjecture doesn’t cite the 1966 paper. It cites instead Lander and Parkin’s “A Counterexample to Euler’s Sum of Powers Conjecture” from Mathematics of Computation volume 21, number 97, of 1967. There the paper has grown to three pages, although it’s only a couple paragraphs of one page and three lines of citation on the third. It’s not so easy to read either. But it does explain how they set about searching for counterexamples, and it may give you some better idea of how numerical mathematicians find things.

     
  • Joseph Nebus 3:00 pm on Tuesday, 7 June, 2016
    Tags: logic

    What’s The Longest Proof I’ve Done? 


    You know what’s a question I’m surprised I don’t get asked? I mean in the context of being a person with an advanced mathematics degree. I don’t get asked what’s the longest proof I’ve ever done. Either just reading to understand, or proving for myself. Maybe people are too intimidated by the idea of advanced mathematics to try asking such things. Maybe they’re afraid I’d bury them under a mountain of technical details. But I’d imagine musicians get asked what the hardest or the longest piece they’ve memorized is. I’m sure artists get asked which painting (or sculpture, or whatnot) they’ve worked on the longest.

    It’s just as well nobody’s asked. I’m not sure what the longest proof I’ve done, or gone through, would even be. Some of it is because there’s an inherent arbitrariness to the concept of “a proof”. Proofs are arguments, and they’re almost always made up of many smaller pieces. The advantage of making these small pieces is that small proofs are usually easier to understand. We can then assemble the conclusions of many small proofs to make one large proof. But then how long was the large proof? Does it contain all the little proofs that go into it?

    And, truth be told, I didn’t think to pay attention to how long any given proof was. If I had to guess I would think the longest proof I’d done, just learned, would be from a grad school course in ordinary differential equations. This is the way we study systems in which how things are changing depends on what things are now. These often match physical, dynamic, systems very well. I remember in the class spending several two-hour sessions trying to get through a major statement in a field called Kolmogorov-Arnold-Moser Theory. This is a major statement about dynamical systems being perturbed, given a little shove. And it describes what conditions make the little shove really change the way the whole system behaves.

    What I’m getting to is that there appears to be a new world’s record-holder for the Longest Actually Completed Proof. It’s about a problem I never heard of before but that’s apparently been open since the 1980s. It’s known as the Boolean Pythagorean Triples problem. The MathsByAGirl blog has an essay about it, and gives some idea of its awesome size. It’s about 200 terabytes of text. As you might imagine, it’s a proof by exhaustion. That is, it divides up a problem into many separate cases, and tries out all the cases. That’s a legitimate approach. It tends to produce proofs that are long and easy to verify, at least at each particular case. They might not be insightful, that is, they might not suggest new stuff to do, but they work. (And I don’t know that this proof doesn’t suggest new stuff to do. I haven’t read it, for good reason. It’s well outside my specialty.)
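
    The problem, as I understand it: color every positive integer red or blue; must some Pythagorean triple a^2 + b^2 = c^2 wind up all one color? The proof says yes, and that the trouble starts around 7,825, if I have the boundary right. For toy ranges you can brute-force the question; a sketch, nothing like the SAT-solver machinery the real proof used:

    ```python
    from itertools import product

    def triples(n):
        """All Pythagorean triples (a, b, c) with entries at most n."""
        return [(a, b, c) for a in range(1, n + 1)
                          for b in range(a, n + 1)
                          for c in range(b, n + 1)
                          if a * a + b * b == c * c]

    def two_colorable(n):
        """Try every red/blue coloring of 1..n; True if some coloring
        leaves no triple all one color. Exponential -- toy sizes only."""
        ts = triples(n)
        return any(all(len({col[a - 1], col[b - 1], col[c - 1]}) > 1
                       for (a, b, c) in ts)
                   for col in product((0, 1), repeat=n))

    print(two_colorable(16))   # True; small ranges are easy to color
    ```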

    But proofs can be even bigger. John Carlos Baez published a while back an essay, “Insanely Long Proofs”. And that’s awe-inspiring. Baez is able to provide theorems which we know to be true. You’ll be able to understand what they conclude, too. And in the logic system applicable to them, their proofs would be so long that the entire universe isn’t big enough just to write down the number of symbols needed to complete the proof. Let me say that again. It’s not that writing out the proof would take more than all the space in the universe. It’s that merely writing out how long the proof would be would take more than all the space in the universe.

    So you should ask, then how do we know it’s true? Baez explains.

     
    • MJ Howard 3:21 pm on Tuesday, 7 June, 2016 Permalink | Reply

      I think part of the problem is that, in general, non-mathematicians don’t have much of a concept of what working mathematicians actually do. Most of the work I do that I think of as Mathematics consists of specific applications and isn’t terribly concerned with proof as such.

      That said, the most time I spent working on a proof was as an undergrad. It was a plane tiling problem involving constraints on the dimensions of the plane. I spent about a week and a half on it and only managed to prove sufficiency.

      Liked by 1 person

      • Joseph Nebus 3:08 am on Saturday, 11 June, 2016 Permalink | Reply

        You’re right. It might also be that people don’t think much about what mathematicians do all day. I’m not perfectly clear on it myself, I must admit. But when I was a real working mathematician most of my research was really numerical simulations and experiments. There were a couple of little cases where I needed to prove something, but it was all in the service of either saying why my numerical experiments should work, or why a surprising result I’d found experimentally actually made sense after all.

        My biggest work in actually coming up with proofs might have been in a real analysis course I took as a grad student. I’d had a lovely open-ended assignment and kept chaining together little proofs about one problem to build a notebook of stuff. This was all proofs about logarithms and exponentials, so none of the results were anything remotely new or surprising, but it was really satisfying to get underneath some computation rules and work them out.

        Like

    • Amie 10:26 pm on Tuesday, 7 June, 2016 Permalink | Reply

      I’m more likely to be asked ‘what is the longest equation that I’ve solved’? :)

      Related to what MJ said, when I interview high-school students for undergraduate maths scholarships, I ask them what is the longest they have ever spent solving a problem. The answer is usually 15 minutes. Occasionally someone says overnight. That gives us one clue as to what non-mathematicians (albeit maths students) think it means to be good at maths and how mathematicians work (that is, solve problems relatively quickly and move on). To be fair, I don’t expect these students to answer any differently because they respond based on (1) their experience and (2) what they think we want to hear. But it is illuminating.

      I’d never heard of the Boolean Pythagorean Triples problem until earlier this week, either. I love that there are easy to understand maths ideas that I’ve never heard of. No idea how the proof works either ;).

      Liked by 1 person

      • Joseph Nebus 3:15 am on Saturday, 11 June, 2016 Permalink | Reply

        Longest equation that I’ve solved … hm. Well, if it’s the equation I spent the longest time in solving that’s got to be something in the inviscid fluid flow that made up a lot of my thesis. The physically longest equation I don’t know. I remember shortly after starting into high school algebra at all trying to think of the hardest possible equation. Given that all I really had to work with was polynomials my first guess was just something with a bunch of variables all raised to high powers. But I also worked out that this was a boring equation. Never did work out what would be both complicated and interesting at once.

        I wonder how much time non-mathematicians expect gets spent on leads that ultimately go nowhere, before a workable approach to the problem is found. Or if not nowhere, then at least on directions that don’t work without a lot of re-thinking and re-casting. There is a desire to show how to get right answers efficiently, for which people can’t be blamed. But the system of learning how to think of ways to get answers probably needs false starts and long periods of pondering that feel like they don’t get anywhere.

        Liked by 1 person

    • mathsbyagirl 7:54 am on Saturday, 11 June, 2016 Permalink | Reply

      I must say, I love your style of writing!

      Like

  • Joseph Nebus 3:00 pm on Sunday, 5 June, 2016 Permalink | Reply
    Tags: , , logic,   

    Reading the Comics, June 3, 2016: Word Problems Without Pictures Edition 


    I haven’t got Sunday’s comics under review yet. But the past seven days were slow ones for mathematically-themed comics. Maybe Comic Strip Master Command is under the impression that it’s the (United States) summer break already. It’s not, although Funky Winkerbean did a goofy sequence graduating its non-player-character students. And Zits has been doing a summer reading storyline that only makes sense if Jeremy Duncan is well into summer. Maybe Comic Strip Master Command thinks it’s a month later than it actually is?

    Tony Cochrane’s Agnes for the 29th of May looks at first like a bit of nonsense wordplay. But whether a book with the subject “All About Books” would discuss itself, and how it would discuss itself, is a logic problem. And not just a logic problem. Start from pondering how the book All About Books would describe the content of itself. You can go from that to an argument that it’s impossible to compress every possible message. (A counting argument gets you there: there are more possible messages of any given length than there are shorter descriptions available, so some message has to go without.) Imagine an All About Books which contained shorthand descriptions of every book. And the descriptions have enough detail to exactly reconstruct each original book. But then what would the book list for the description of All About Books?

    And self-referential things can lead to logic paradoxes swiftly. You’d have some fine ones if Agnes were to describe a book All About Not-Described Books. Is the book described in itself? The question again sounds silly. But thinking seriously about it leads us to the decidability problem. Any interesting-enough logical system will always have statements that are meaningful and true that no one can prove.

    Furthermore, the suggestion of an “All About `All About Books’ Book” suggests to me power sets. That’s the set of all the ways you can collect the elements of a set. Power sets are always bigger than the original set. They lead to the staggering idea that there are many sizes of infinitely large sets, a never-ending stack of bigness.
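
    For a finite set you can see a power set concretely. A small sketch:

    ```python
    from itertools import combinations

    def power_set(s):
        """Every subset of s, from the empty set up to s itself."""
        items = list(s)
        return [set(c) for r in range(len(items) + 1)
                       for c in combinations(items, r)]

    print(power_set({'a', 'b'}))   # 4 subsets from a 2-element set
    # A set with n elements has 2**n subsets, always more than n.
    ```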

    Robb Armstrong’s Jump Start for the 31st of May is part of a sequence about getting a tutor for a struggling kid. That it’s mathematics is incidental to the storyline, it must be said. (It’s an interesting storyline, partly about Jojo’s father, a police officer, coming to trust Ray, an ex-convict. Jump Start tells many interesting and often deeply weird storylines. And it never loses its camouflage of being an ordinary family comic strip.) It uses the familiar gimmick of motivating a word problem by making it about something tangible.

    Ken Cursoe’s Tiny Sepuku for the 2nd of June uses the motif of non-Euclidean geometry as some supernatural magic. It’s a small reference, you might miss it. I suppose it is true that a high-dimensional analogue to conic sections would focus things from many dimensions. If those dimensions match time and space, maybe it would focus something from all humanity into the brain. I would try studying instead, though.

    Russell Myers’s Broom Hilda for the 3rd is a resisting-the-word-problems joke. It’s funny to figure on missing big if you have to be wrong at all. But something you learn in numerical mathematics, particularly, is that it’s all right to start from a guess. Often you can take a wrong answer and improve it. If you can’t get the exact right answer, you can usually get a better answer. And often you can get as good as you need. So in practice, sorry to say, I can’t recommend going for the ridiculous answer. You can do better.
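
    That improve-a-wrong-answer idea deserves a tiny example. Newton’s method for square roots, say (my illustration, nothing from the strip):

    ```python
    def improve(guess, n):
        """One Newton step for sqrt(n): average the guess with n/guess.
        A wrong answer comes back less wrong."""
        return (guess + n / guess) / 2

    x = 10.0             # a deliberately bad guess for sqrt(2)
    for _ in range(6):
        x = improve(x, 2.0)
        print(x)         # closes in on 1.41421356... within a few steps
    ```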

     
    • seaangel4444 4:41 pm on Sunday, 5 June, 2016 Permalink | Reply

      LOL I love the Broom Hilda cartoon, Joseph! And here I am, “smiling”! :) Cher xo

      Like

      • Joseph Nebus 2:56 am on Saturday, 11 June, 2016 Permalink | Reply

        Aw, quite glad you like. I do enjoy doing these comic strip reviews, partly for the chance to talk about subjects, partly because people get to see strips they hadn’t noticed before.

        Liked by 1 person

  • Joseph Nebus 3:00 pm on Wednesday, 20 April, 2016 Permalink | Reply
    Tags: , boredom, , , logic, ,   

    A Leap Day 2016 Mathematics A To Z: Wlog 


    Wait for it.

    Wlog.

    I’d like to say a good word for boredom. It needs the good words. The emotional state has an appalling reputation. We think it’s the sad state someone’s in when they can’t find anything interesting. It’s not. It’s the state in which we are so desperate for engagement that anything is interesting enough.

    And that isn’t a bad thing! Finding something interesting enough is a precursor to noticing something curious. And curiosity is a precursor to discovery. And discovery is a precursor to seeing a fuller richness of the world.

    Think of being stuck in a waiting room, deprived of reading materials or a phone to play with or much of anything to do. But there is a clock. Your classic analog-face clock. Its long minute hand sweeps out the full 360 degrees of the circle once every hour, 24 times a day. Its short hour hand sweeps out that same arc every twelve hours, only twice a day. Why is the big unit of time marked with the short hand? Good question, I don’t know. Probably, ultimately, because it changes so much less than the minute hand that it doesn’t need the attention a longer hand would draw to it.

    But let our waiting mathematician get a little more bored, and think more about the clock. The hour and minute hand must sometimes point in the same direction. They do at 12:00 by the clock, for example. And they will at … a little bit past 1:00, and a little more past 2:00, and a good while after 9:00, and so on. How many times during the day will they point the same direction?

    Well, one easy way to do this is to work out how long it takes the hands, once they’ve met, to meet up again. Presumably we don’t want to wait the whole hour-and-some-more-time for it. But how long is that? Well, we know the hands start out pointing the same direction at 12:00. The first time after that will be after 1:00. At exactly 1:00 the hour hand is 30 degrees clockwise of the minute hand. The minute hand will need five minutes to catch up to that. In those five minutes the hour hand will have moved another 2.5 degrees clockwise. The minute hand needs about four-tenths of a minute to catch up to that. In that time the hour hand moves — OK, we’re starting to see why Zeno was not an idiot. He never was.

    But we have this roughly worked out. It’s about one hour, five and a half minutes between one time the hands meet and the next. In the course of twelve hours there’ll be time for them to meet up … oh, of course, eleven times. Over the course of the day they’ll meet up 22 times and we can get into a fight over whether midnight counts as part of today, tomorrow, or both days, or neither. (The answer: pretend the day starts at 12:01.)
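
    If you would rather not chase Zeno, the hands’ speeds give the answer directly: the minute hand gains 330 degrees per hour on the hour hand, so meetings come every 360/330 = 12/11 hours. A sketch listing the eleven meeting times:

    ```python
    # Meetings come every 12/11 hours, starting from 12:00.
    interval = 12 / 11
    for k in range(11):
        t = k * interval                  # hours after 12:00
        h, m = int(t), (t - int(t)) * 60
        print(f"{(h - 1) % 12 + 1}:{m:05.2f}")
        # 12:00.00, 1:05.45, 2:10.91, ..., 10:54.55
    ```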

    Hold on, though. How do we know that the time between the hands meeting up at 12:00 and the one at about 1:05 is the same as the time between the hands meeting up near 1:05 and the next one, sometime a little after 2:10? Or between that one and the one at a little past 3:15? What grounds do we have for saying this one interval is a fair representation of them all?

    We can argue that it should be, fairly enough. Imagine that all the markings were washed off the clock. It’s just two hands sweeping around in circles, one relatively fast, one relatively slow, forever. Give the clockface a spin. When the hands come together again rotate the clock so those two hands are vertical, the “12:00” position. Is this actually 12:00? … Well, we’ve got a one-in-eleven chance it is. It might be a little past 1:05; it might be sometime past 6:30. The movement of the clock hands gives no hint what time it really is.

    And that is why we’re justified taking this one interval as representative of them all. The rate at which the hands move, relative to each other, doesn’t depend on what the clock face behind it says. The rate is, if the clock isn’t broken, always the same. So we can use information about one special case that happens to be easy to work out to handle all the cases.

    That’s the mathematics term for this essay. We can study the one specific case without loss of generality, or as it’s inevitably abbreviated, wlog. This is the trick of studying something possibly complicated, possibly abstract, by looking for a representative case. That representative case may tell us everything we need to know, at least about this particular problem. Generality means what you might figure from the ordinary English meaning of it: it means this answer holds in general, as opposed to in this specific instance.

    Some thought has to go into choosing the representative case. We have to pick something that doesn’t, somehow, miss out on a class of problems we would want to solve. We mustn’t lose the generality. And it’s an easy mistake to make, especially as a mathematics student first venturing into more abstract waters. I remember coming up against that often when trying to prove properties of infinitely long series. It’s so hard to reason something out about a bunch of numbers whose identities I have no idea about; why can’t I just use the sequence, oh, 1/1, 1/2, 1/3, 1/4, et cetera and let that be good enough? Maybe 1/1, 1/4, 1/9, 1/16, et cetera for a second test, just in case? It’s because it takes time to learn how to safely handle infinities.

    It’s still worth doing. Few of us are good at manipulating things in the abstract. We have to spend more mental energy imagining the thing rather than asking the questions we want of it. Reducing that abstraction — even if it’s just a little bit, changing, say, from “an infinitely-differentiable function” to “a polynomial of high enough degree” — can rescue us. We can try out things we’re confident we understand, and derive from it things we don’t know.

    I can’t say that a bored person observing a clock would deduce all this. Parts of it, certainly. Maybe all, if she thought long enough. I believe it’s worth noticing and thinking of these kinds of things. And it’s why I believe it’s fine to be bored sometimes.

     
    • howardat58 3:33 pm on Wednesday, 20 April, 2016 Permalink | Reply

      Your point about how mathematicians think is so vital and so overlooked in the teaching of math in schools. Even the question “Is it true in a special case?” is a question rarely asked.

      Liked by 1 person

      • Joseph Nebus 2:13 am on Friday, 22 April, 2016 Permalink | Reply

        Well, thank you. While writing this I did get to thinking about how we find things that can be picked out without loss of generality, versus actually losing generality. And I didn’t think of a good example of losing generality, partly because I’m writing these much closer to deadline than I imagined and partly because I thought I was running long as it was.

        I might put a follow-up post on about how to pick examples, though.

        Liked by 2 people

  • Joseph Nebus 3:00 pm on Thursday, 14 April, 2016 Permalink | Reply
    Tags: , flash cards, , , logic, , , ,   

    Reading the Comics, April 10, 2016: Four-Digit Prime Number Edition 


    In today’s installment of Reading The Comics, mathematics gets name-dropped a bunch in strips that aren’t really about my favorite subject other than my love. Also, I reveal the big lie we’ve been fed about who drew the Henry comic strip attributed to Carl Anderson. Finally, I get a question from Queen Victoria. I feel like this should be the start of a podcast.

    Todd responds to arithmetic flash cards: 'Tater tots! Sloppy Joes! Mac and Cheese!' 'Todd, what are you doing? These are all math!' 'Sorry ... every day at school we have math right before lunch and you told me to say the first thing that pops into my mind!'

    Patrick Roberts’ Todd the Dinosaur for the 6th of April, 2016.

    Patrick Roberts’ Todd the Dinosaur for the 6th of April just name-drops mathematics. The flash cards suggest it. They’re almost iconic for learning arithmetic. I’ve seen flash cards for other subjects. But apart from learning the words of other languages I’ve never been able to make myself believe they’d work. On the other hand, I haven’t used flash cards to learn (or teach) things myself.

    Mom, taking the mathematics book away from Bad Dad: 'I'll take over now ... fractions and long division aren't `scientifically accepted as unknowable`.'

    Joe Martin’s Boffo for the 7th of April, 2016. I bet the link expires in early May.

    Joe Martin’s Boffo for the 7th of April is a solid giggle. (I have a pretty watery giggle myself.) There are unknowable, or at least unprovable, things in mathematics. Any logic system with enough rules to be interesting has ideas which would make sense, and which might be true, but which can’t be proven. Arithmetic is such a system. But just fractions and long division by itself? No, I think we need something more abstract for that.

    Henry is sent to bed. He can't sleep until he reads from his New Math text.

    Carl Anderson’s Henry for the 7th of April, 2016.

    Carl Anderson’s Henry for the 7th of April is, of course, a rerun. It’s also a rerun that gives away that the “Carl Anderson” credit is a lie. Anderson turned over drawing the comic strip in 1942 to John Liney, for weekday strips, and Don Trachte for Sundays. There is no possible way the phrase “New Math” appeared on the cover of a textbook Carl Anderson drew. Liney retired in 1979, and Jack Tippit took over until 1983. Then Dick Hodgins, Jr, drew the strip until 1990. So depending on how quickly word of the New Math penetrated Comic Strip Master Command, this was drawn by either Liney, Tippit, or possibly Hodgins. (Peanuts made New Math jokes in the 60s, but it does seem the older the comic strip the longer it takes to mention new stuff.) I don’t know when these reruns date from. I also don’t know why Comics Kingdom is fibbing about the artist. But then they went and cancelled The Katzenjammer Kids without telling anyone either.

    Eric the Circle for the 8th, this one by “lolz”, declares that Eric doesn’t like being graphed. This is your traditional sort of graph, one in which points with coordinates x and y are on the plot if their values make some equation true. For a circle, that equation’s something like (x - a)^2 + (y - b)^2 = r^2. Here (a, b) are the coordinates for the point that’s the center of the circle, and r is the radius of the circle. This looks a lot like Eric is centered on the origin, the point with coordinates (0, 0). It’s a popular choice. Any center is as good. Another would just have equations that take longer to work with.
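
    That is all a graph of an equation is: the points that pass a test. A trivial sketch of the test:

    ```python
    def on_circle(x, y, a=0.0, b=0.0, r=5.0):
        """True if (x, y) satisfies (x - a)^2 + (y - b)^2 = r^2."""
        return (x - a) ** 2 + (y - b) ** 2 == r ** 2

    print(on_circle(3, 4))   # True: 9 + 16 = 25
    print(on_circle(4, 4))   # False: 32 != 25
    ```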

    Richard Thompson’s Cul de Sac rerun for the 10th is so much fun to look at that I’m including it even though it just name-drops mathematics. The joke would be the same if it were something besides fractions. Although see Boffo.

    Norm Feuti’s Gil rerun for the 10th takes on mathematics’ favorite group theory application, the Rubik’s Cube. It’s the way I solved them best. This approach falls outside the bounds of normal group theory, though.

    Mac King and Bill King’s Magic in a Minute for the 10th shows off a magic trick. It’s also a non-Rubik’s-cube problem in group theory. One of the groups that a mathematics major learns, after integers-mod-four and the like, is the permutation group. In this, the act of swapping two (or more) things is a thing. This puzzle restricts the allowed permutations down to swapping one item with the thing next to it. And thanks to that, an astounding result emerges. It’s worth figuring out why the trick would work. If you can figure out the reason the first set of switches has to leave a penny on the far right then you’ve got the gimmick solved.

    Pab Sungenis’s New Adventures of Queen Victoria for the 10th made me wonder just how many four-digit prime numbers there are. If I haven’t worked this out wrong, there’s 1,061 of them.
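
    That count is quick to double-check by trial division, if you don’t trust me:

    ```python
    def is_prime(n):
        """Trial division; plenty fast for four-digit numbers."""
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    print(sum(is_prime(n) for n in range(1000, 10000)))   # 1061
    ```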

     
  • Joseph Nebus 3:00 pm on Monday, 14 March, 2016 Permalink | Reply
    Tags: , formal language, , , , logic,   

    A Leap Day 2016 Mathematics A To Z: Grammar 


    My next entry for this A To Z was another request, this one from Jacob Kanev, who doesn’t seem to have a WordPress or other blog. (If I’m mistaken, please, let me know.) Kanev’s given me several requests, some of them quite challenging. Some too challenging: I have to step back from describing “both context sensitive and not” kinds of grammar just now. I hope all will forgive me if I just introduce the base idea.

    Grammar.

    One of the ideals humans hold when writing a mathematical proof is to crush all humanity from the proof. It’s nothing personal. It reflects a desire to be certain we have proved things without letting any unstated assumptions or unnoticed biases interfere. The 19th century was a lousy century for mathematicians and their intuitions. Many ideas that seemed clear enough turned out to be paradoxical. It’s natural to want to not make those mistakes again. We can succeed.

    We can do this by stripping out everything but the essentials. We can even do away with words. After all, if I say something is a “square”, that suggests I mean what we mean by “square” in English. Our mathematics might not have proved all the square-ness of the thing. And so we reduce the universe to symbols. Letters will do as symbols, if we want to be kind to our typesetters. We do want to be kind now that, thanks to LaTeX, we do our own typesetting.

    This is called building a “formal language”. The “formal” here means “relating to the form” rather than “the way you address people when you can’t just say `heya, gang’.” A formal language has two important components. One is the symbols that can be operated on. The other is the operations you can do on the symbols.

    If we’ve set it all up correctly then we get something wonderful. We have “statements”. They’re strings of the various symbols. Some of the statements are axioms; they’re assumed to be true without proof. We can turn a statement into another one by using a statement we have and one of the operations. If the operation requires, we can add in something else we already know to be true. Something we’ve already proven.

    Any statement we build this way — starting from an axiom and building with the valid operations — is a new and true statement. It’s a theorem. The proof of the theorem? It’s the full sequence of symbols and operations that we’ve built. The line between advanced mathematics and magic is blurred. To give a theorem its full name is to give its proof. (And now you understand why the biographies of many of the pioneering logicians of the late 19th and early 20th centuries include a period of fascination with the Kabbalah and other forms of occult or gnostic mysticism.)

    A grammar is what’s required to describe a language like this. It’s defined to be a quartet of properties. The first property is the collection of symbols that can’t be the end of a statement. These are called nonterminal symbols. The second property is the collection of symbols that can end a statement. These are called terminal symbols. (You see why we want to have those as separate lists.) The third property is the collection of rules that let you build new statements from old. The fourth property is the collection of things we take to be true to start. We only have finitely many options for each of these, at least for your typical grammar. I imagine someone has experimented with infinite grammars. But that hasn’t become enough of a research field that people have to pay attention to it. Not yet, anyway.

    Now it’s reasonable to ask if we need mathematicians at all. If building up theorems is just a matter of applying the finitely many rules of inference on finitely many collections of symbols, finitely many times over, then what about this can’t be done by computer? And done better by a computer, since a computer doesn’t need coffee, or bathroom breaks an hour later, or the hope of moving to a tenure-track position?

    Well, we do need mathematicians. I don’t say that just because I hope someone will give me money in exchange for doing mathematics. It’s because setting up a computer to just grind out every possible theorem will never turn up what you want to know now. There are several reasons for this.

    Here’s a way to see why. It’s drawn from Douglas Hofstadter’s Gödel, Escher, Bach, a copy of which you can find in any college dorm room or student organization office. At least you could back when I was an undergraduate. I don’t know what the kids today use.

    Anyway, this scheme has three nonterminal symbols: I, M, and U. As a terminal symbol … oh, let’s just use the space at the end of a string. That way everything looks like words. We will include a couple variables, lowercase letters like x and y and z. They stand for any string of nonterminal symbols. They’re falsework. They help us get work done, but must not appear in our final result.

    There’s four rules of inference. The first: if xI is valid, then so is xIM. The second: if Mx is valid, then so is Mxx. The third: if MxIIIy is valid, then so is MxUy. The fourth: if MxUUy is valid, then so is Mxy.

    We have one axiom, assumed without proof to be true: MI.

    So let’s putter around some. MI is true. So by the second rule, so is MII. That’s a theorem. And since MII is true, by the second rule again, so is MIIII. That’s another theorem. Since MIIII is true, by the first rule, so is MIIIIM. We’ve got another theorem already. Since MIIIIM is true, by the third rule, so is MIUM. We’ve got another theorem. For that matter, since MIIIIM is true, again by the third rule, so is MUIM. Would you like MIUMIUM? That’s waiting there to be proved too.
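
    The rules are simple enough to turn loose on a computer, which also previews how fast the theorems pile up. A sketch implementing the four rules exactly as stated above (note they differ a bit from Hofstadter’s original MIU system, which appends U rather than M):

    ```python
    import re
    from itertools import islice

    def successors(s):
        """Apply the four rules, as stated above, to one string."""
        out = set()
        if s.endswith('I'):                       # rule 1: xI -> xIM
            out.add(s + 'M')
        if s.startswith('M'):                     # rule 2: Mx -> Mxx
            out.add(s + s[1:])
        for m in re.finditer('(?=III)', s[1:]):   # rule 3: MxIIIy -> MxUy
            i = m.start() + 1
            out.add(s[:i] + 'U' + s[i + 3:])
        for m in re.finditer('(?=UU)', s[1:]):    # rule 4: MxUUy -> Mxy
            i = m.start() + 1
            out.add(s[:i] + s[i + 2:])
        return out

    def theorems(cap=9):
        """Breadth-first enumeration from the axiom MI, ignoring
        strings longer than cap so the flood stays finite."""
        seen, frontier = {'MI'}, ['MI']
        while frontier:
            for s in frontier:
                yield s
            grown = {t for s in frontier for t in successors(s)
                     if t not in seen and len(t) <= cap}
            seen |= grown
            frontier = sorted(grown, key=lambda t: (len(t), t))

    print(list(islice(theorems(), 10)))
    # ['MI', 'MII', 'MIM', 'MIIM', 'MIIII', 'MIMIM', 'MIU', 'MUI', ...]
    ```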

    And that will do. First question: what does any of this even mean? Nobody cares about whether MIUMIUM is a theorem in this system. Nobody cares about figuring out whether MUIUMUIUI might be a theorem. We care about questions like “what’s the smallest odd perfect number?” or “how many equally-strong vortices can be placed in a ring without the system becoming unstable?” With everything reduced to symbol-shuffling like this we’re safe from accidentally assuming something which isn’t justified. But we’re pretty far from understanding what these theorems even mean.

    In this case, these strings don’t mean anything. They’re a toy so we can get comfortable with the idea of building theorems this way. We don’t expect them to do any more work than we expect Lincoln Logs to build usable housing. But you can see how we’re starting pretty far from most interesting mathematics questions.

    Still, if we started from a system that meant something, we would get there in time, right? … Surely? …

    Well, maybe. The thing is, even with this I, M, U scheme and its four rules there are a lot of things to try out. From the first axiom, MI, we can produce either MII or MIM. From MII we can produce MIIM or MIIII. From MIIII we could produce MIIIIM, or MUI, or MIU, or MIIIIIIII. From each of those we can produce … quite a bit of stuff.

    All of those are theorems in this scheme and that’s nice. But it’s a lot. Suppose we have set up symbols and axioms and rules that have clear interpretations that relate to something we care about. If we set the computer to produce every possible legitimate result we are going to produce an enormous number of results that we don’t care about. They’re not wrong, they’re just off-point. And there’s a lot more true things that are off-point than there are true things on-point. We need something with judgement to pick out results that have anything to do with what we want to know. And trying out combinations to see if we can produce the pattern we want is hard. Really hard.

    And there’s worse. If we set up a formal language that matches real mathematics, then we need a lot of work to prove anything. Even simple statements can take forever. I seem to remember my logic professor needing 27 steps to work out the uncontroversial theorem “if x = y and y = z, then x = z”. (Granting he may have been taking the long way around for demonstration purposes.) We would have to look in theorems of unspeakably many symbols to find the good stuff.

    Now it’s reasonable to ask what the point of all this is. Why create a scheme that lets us find everything that can be proved, only to have all we’re interested in buried in garbage?

    There are some uses. To make us swear we’ve read Jorge Luis Borges, for one. Another is to study the theory of what we can prove. That is, what are we able to learn by logical deduction? And another is to design systems meant to let us solve particular kinds of problems. That approach makes the subject merge into computer science. Code for a computer is, in a sense, about how to change a string of data into another string of data. What are the legitimate data to start with? What are the rules by which to change the data? And these are the sorts of things grammars, and the study of grammars, are about.

     
    • Jacob Kanev 7:07 am on Tuesday, 15 March, 2016 Permalink | Reply

      A beautiful post, thank you; and very well explained. I remember our professor linking grammars and Turing machines with Church’s thesis, the fact that the brain is a deterministic machine, and Gödel’s theorem, to arrive at some pretty fundamental claims about perception and knowledge in general. Well, I guess every professor tries to sell their own subject as the most substantial of all. Although he was pretty successful with this one.

      Btw, I do have a wordpress blog: https://jacobkanev.wordpress.com/

      Like

      • Joseph Nebus 7:23 am on Wednesday, 16 March, 2016 Permalink | Reply

        I’m happy to be of service and glad that you liked the essay as it turned out.

        I’d agree with your professor in linking grammars to Turing machines and fundamental ideas about what knowledge we can have. Grammars are ways of describing what we can know about a system, and if we’re looking seriously into the subject that has to bring us to the decidability problems and the limits of knowledge. I’m less sure about perception, but I don’t know what case your professor made.

        And I’m glad for the blog link; thank you.

        Like

    • elkement (Elke Stangl) 7:45 am on Friday, 18 March, 2016 Permalink | Reply

      Great post! I was a die-hard Gödel-Escher-Bach fan :-) That book made it difficult for me to choose between physics, math, or computer science.

      Like

  • Joseph Nebus 3:00 pm on Friday, 4 March, 2016 Permalink | Reply
    Tags: , , , , logic, ,   

    A Leap Day 2016 Mathematics A To Z: Conjecture 


    For today’s entry in the Leap Day 2016 Mathematics A To Z I have an actual request from Elke Stangl. I’d had another ‘c’ request, for ‘continued fractions’. I’ve decided to address that by putting ‘Fractions, continued’ on the roster. If you have other requests, for letters not already committed, please let me know. I’ve got some letters I can use yet.

    Conjecture.

    An old joke says a mathematician’s job is to turn coffee into theorems. I prefer tea, which may be why I’m not employed as a mathematician. A theorem is a logical argument that starts from something known to be true. Or we might start from something assumed to be true, if we think the setup interesting and plausible. And it uses laws of logical inference to draw a conclusion that’s also true and, hopefully, interesting. If it isn’t interesting, maybe it’s useful. If it isn’t either, maybe at least the argument is clever.

    How does a mathematician know what theorems to try proving? We could assemble any combination of premises as the setup to a possible theorem. And we could imagine all sorts of possible conclusions. Most of them will be syntactically gibberish, the equivalent of our friends the monkeys banging away on keyboards. Of those that aren’t, most will be untrue, or at least impossible to argue. Of the rest, potential theorems that could be argued, many will be too long or too unfocused to follow. Only a tiny few potential combinations of premises and conclusions could form theorems of any value. How does a mathematician get a good idea where to spend her time?

    She gets it from experience. In learning what theorems, what arguments, have been true in the past she develops a feeling for things that would plausibly be true. In playing with mathematical constructs she notices patterns that seem to be true. As she gains expertise she gets a sense for things that feel right. And she gets a feel for what would be a reasonable set of premises to bundle together. And what kinds of conclusions probably follow from an argument that people can follow.

    This potential theorem, this thing that feels like it should be true, is a conjecture.

    Properly, we don’t know whether a conjecture is true or false. The most we can say is that we don’t have evidence that it’s false. New information might show that we’re wrong and we would have to give up the conjecture. Finding new examples that it’s true might reinforce our idea that it’s true, but that doesn’t prove it’s true.

    For example, we have the Goldbach Conjecture. According to it every even number greater than two can be written as the sum of exactly two prime numbers. The evidence for it is very good: every even number we’ve tried has worked out, up through at least 4,000,000,000,000,000,000. But it isn’t proven. It’s possible that it can’t even be proved from the standard rules of arithmetic.
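
    Checking the small even numbers is quick, the sort of thing that builds evidence without ever being proof. A sketch:

    ```python
    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def goldbach_pair(n):
        """First pair of primes (p, n - p) summing to even n, or None."""
        for p in range(2, n // 2 + 1):
            if is_prime(p) and is_prime(n - p):
                return (p, n - p)
        return None

    for n in range(4, 32, 2):
        print(n, goldbach_pair(n))   # every even number here has a pair
    ```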

    That’s a famous conjecture. It’s frustrated mathematicians for centuries. It’s easy to understand and nobody’s found a proof. Famous conjectures, the ones that get names, tend to do that. They looked nice and simple and had hidden depths.

    Most conjectures aren’t so storied. They instead appear as notes at the end of a section in a journal article or a book chapter. Or they’re put on slides meant to refresh the audience’s interest where it’s needed. They are needed at the fifteen-minute mark of a presentation, just after four slides full of dense equations. They are also needed at the 35-minute mark, in the middle of a field of plots with too many symbols and not enough labels. And one’s needed just before the summary of the talk, so that the audience can try to remember what the presentation was about and why they thought they could understand it. If the deadline were not so tight, if the conference were a month or so later, perhaps the mathematician would find a proof for these conjectures.

    Perhaps. As above, some conjectures turn out to be hard. Fermat’s Last Theorem stood for some three and a half centuries as a conjecture. Its first proof turned out to be nothing like anything Fermat could have had in mind. Mathematics popularizers lost an easy hook when that was proven. We used to be able to start an essay on Fermat’s Last Theorem by huffing about how it was properly a conjecture but the wrong term stuck to it because English is a perverse language. Now we have to start by saying how it used to be a conjecture instead.

    But few are like that. Most conjectures are ideas that feel like they ought to be true. They appear because a curious mind will look for new ideas that resemble old ones, or will notice patterns that seem to resemble old patterns.

    And sometimes conjectures turn out to be false. Something can look like it ought to be true, or maybe would be true, and yet be false. Often we can prove something isn’t true by finding an example, just as you might expect. But that doesn’t mean it’s easy. Here’s a false conjecture, one that was put forth by Goldbach. All odd numbers are either prime, or can be written as the sum of a prime and twice a square number. (He considered 1 to be a prime number.) It’s not true, but it took over a century to show that. If you want to find a counterexample go ahead and have fun trying.
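
    The search is mechanical, so here is a sketch of it; run the range high enough and the counterexamples turn up. (If I remember right, the smallest appear just short of 6,000.)

    ```python
    def is_prime(n):
        if n < 2:
            return False
        d = 2
        while d * d <= n:
            if n % d == 0:
                return False
            d += 1
        return True

    def satisfies(n):
        """True if odd n is prime, or is a prime (or 1, which Goldbach
        counted as prime) plus twice a square."""
        if n == 1 or is_prime(n):
            return True
        k = 1
        while 2 * k * k < n:
            r = n - 2 * k * k
            if r == 1 or is_prime(r):
                return True
            k += 1
        return False

    print([n for n in range(3, 6001, 2) if not satisfies(n)])
    ```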

    Still, if a mathematician turns coffee into theorems, it is through the step of finding conjectures, promising little paths in the forest of what is not yet known.

     
    • elkement (Elke Stangl) 9:38 pm on Friday, 4 March, 2016 Permalink | Reply

      Thanks :-) So you say that experts’ intuition that might look like magic to laymen is actually pattern recognition, correct? (I think I have read about this in pop-sci psychology books) And if an unproven theorem passes the pattern recognition filter it is promoted to conjecture.

      Like

      • Joseph Nebus 7:27 am on Wednesday, 9 March, 2016 Permalink | Reply

        I think that there is a large aspect of it that’s pattern recognition, yes. But some of that may be that we look for things that resemble what’s already worked. So, like, if we already have a theorem about how a sequence of real-valued functions converges to a new real-valued function, then it’s natural to think about variants. Can we say something about sequences of complex-valued functions? If the original theorem demanded functions that were continuous and had infinitely many derivatives, can we loosen that to a function that’s continuous and has only finitely many derivatives? Can we lose the requirement that there be derivatives and still say something?

        I realized at one point while taking real analysis in grad school that many of the theorems we were moving into looked a lot like what we already had with one or two variations, and could sometimes write out the next theorem almost by rote. There is certainly a kind of pattern recognition at work here, though sometimes it can feel like playing with the variations on a theme.

        Liked by 1 person

        • elkement (Elke Stangl) 7:37 am on Wednesday, 9 March, 2016 Permalink | Reply

          Yes, I agree – I meant pattern recognition in exactly this way, in a very broad way … searching for a similar pattern in your own experiences, among things you have encountered and that worked. I was thinking in general terms and comparing to other skills and expertise, like what makes you successful in any kind of tech troubleshooting. It seems that you have an intuitive feeling about what may work but actually you draw on related scenarios or aspects of scenarios we had solved.

          Like

    • Pen & Shutter 1:09 pm on Saturday, 5 March, 2016 Permalink | Reply

      I understood all that! I definitely deserve a prize … I am no mathematician … And I enjoyed every word! I love your use of English.

      Like

    • davekingsbury 3:25 pm on Saturday, 5 March, 2016 Permalink | Reply

      If you’ve nothing for Q, what about Quadratic Equations … though I start twitching whenever I think about them!

      Like

      • Joseph Nebus 7:43 am on Wednesday, 9 March, 2016 Permalink | Reply

        I’m sorry to say Q already got claimed, by ‘quaternion’. But P got ‘polynomial’, which should be close enough to quadratic equations that there’s at least some help there.

        Liked by 1 person

  • Joseph Nebus 3:00 pm on Monday, 29 February, 2016 Permalink | Reply
    Tags: , axioms, , , logic, reality   

    A Leap Day 2016 Mathematics A To Z: Axiom 


    I had a great deal of fun last summer with an A To Z glossary of mathematics terms. To repeat a trick with some variation, I called for requests a couple weeks back. I think the requests have settled down so let me start. (However, if you’ve got a request for one of the latter alphabet letters, please let me know. There’s ten letters not yet committed.) I’m going to call this a Leap Day 2016 Mathematics A To Z to mark when it sets off. This way I’m not committed to wrapping things up before a particular season ends. On, now, to the start and the first request, this one from Elke Stangl:

    Axiom.

    Mathematics is built of arguments. Ideally, these are all grounded in deductive logic. These would be arguments that start from things we know to be true, and use the laws of logical inference to conclude other things that are true. We want valid arguments, ones in which every implication is based on true premises and correct inferences. In practice we accept some looseness about this, because it would just take forever to justify every single little step. But the structure is there. From some things we know to be true, deduce something we hadn’t before proven was true.

    But where do we get things we know to be true? Well, we could ask the philosophy department. The question’s one of their specialties. But we might be scared of them, and they of us. After all, the mathematics department and the philosophy department are only usually both put in the College of Arts and Sciences. Sometimes philosophy is put in the College of Humanities instead. Let’s stay where we were instead.

    We know to be true stuff we’ve already proved to be true. So we can use the results of arguments we’ve already finished. That’s comforting. Whatever work we, or our forerunners, have done was not in vain. But how did we know those results were true? Maybe they were the consequences of earlier stuff we knew to be true. Maybe they came from earlier valid arguments.

    You see the regression problem. We don’t have anything we know to be true except the results of arguments, and the arguments depended on having something true to build from. We need to start somewhere.

    The real world turns out to be a poor starting point, by the way. Oh, it’s got some good sides. Reality is useful in many ways, but it has a lot of problems to be resolved. Most things we could say about the real world are transitory: they were once untrue, became true, and will someday be false again. It’s hard to see how you can build a universal truth on a transitory foundation. And that’s even if we know what’s true in the real world. We have senses that seem to tell us things about the real world. But the philosophy department, if we eavesdrop on them, would remind us of some dreadful implications. The concept of “the real world” is hard to make precise. Even if we suppose we’ve done that, we don’t know that what we could perceive has anything to do with the real world. The folks in the psychology department and the people who study physiology reinforce the direness of the situation. Even if perceptions can tell us something relevant, and even if our senses aren’t deliberately deceived, they’re still bad at perceiving stuff. We need to start somewhere else if we want certainty.

    That somewhere is the axiom. We declare some things to be a kind of basic law. Here are some things we need not prove true; they simply are.

    (Sometimes mathematicians say “postulate” instead of “axiom”. This is because some things sound better called “postulates”. Meanwhile other things sound better called “axioms”. There is no functional difference.)

    Most axioms tend to be straightforward things. We tend to like having uncontroversial foundations for our arguments. It may hardly seem necessary to say “all right angles are congruent”, but how would you prove that? It may seem obvious that, given a collection of sets of things, it’s possible to select exactly one thing from each of those sets. How do you know you can?

    Well, they might follow from some other axioms, by some clever enough argument. This is possible. Mathematicians consider it elegant to have as few axioms as necessary for their work. (They’re not alone, or rare, in that preference.) I think that reflects a cultural desire to say as much as possible with as little work as possible. The more things we have to assume to show a thing is true, the more likely that in a new application one of those assumptions won’t hold. And that would spoil our knowledge of that conclusion. Sometimes we can show the interesting point of one axiom could be derived from some other axiom or axioms. We might replace an axiom with these alternates if that gives us more enlightening arguments.

    Sometimes people seize on this whole axiom business to argue that mathematics (and science, dragged along behind) is a kind of religion. After all, you need to have faith that some things are true. This strikes me as bad theology and poor mathematics. The most obvious difference between an article of faith and an axiom must be that axioms are voluntary. They are things you assume to be true because you expect them to enlighten something you wish to study. If they don’t, you’re free to try other axioms.

    The axiom I mentioned three paragraphs back, about selecting exactly one thing from each of a collection of sets? That’s known as the Axiom of Choice. It’s used in the theory of sets. But you don’t have to assume it’s true. Much of set theory stands independent of it. Many set theorists go about their work committing neither to the idea that it’s true nor to the idea that it’s false.

    What makes a good set of axioms is rather like what makes a good set of rules for a sport. You do want to have a set that’s reasonably clear. You want them to provide for many interesting consequences. You want them to not have any contradictions. (You settle for them having no contradictions anyone’s found or suspects.) You want them to have as few ambiguities as possible. What makes up that set may evolve as the field, or as the sport, evolves. People do things that weren’t originally thought about. People get more experience and more perspective on the way the rules are laid out. People notice they had been assuming something without stating it. We revise and, we hope, improve the foundations with time.

    There’s no guarantee that every set of axioms will produce something interesting. Well, you wouldn’t expect to necessarily get a playable game by throwing together some random collection of rules from several different sports, either. Most mathematicians stick to familiar groups of axioms, for the same reason most athletes stick to sports they didn’t make up. We know from long experience that this set will give us an interesting geometry, or calculus, or topology, or so on.

    There’ll never be a standard universal set of axioms covering all mathematics. There are different sets of axioms that directly contradict each other but that are, to the best of our knowledge, internally self-consistent. The axioms that describe geometry on a flat surface, like a map, are inconsistent with those that describe geometry on a curved surface, like a globe. We need both maps and globes. So we have both flat and curved geometries, and we decide what kind fits the work we want to do.

    And there’ll never be a complete list of axioms for any interesting field, either. One of the unsettling discoveries of 20th Century logic was of incompleteness. Any set of axioms interesting enough to cover the ability to do arithmetic will have statements that would be meaningful, but that can’t be proven true or false. We might add some of these undecidable things to the set of axioms, if they seem useful. But we’ll always have other things not provably true or provably false.

     
    • gaurish 3:30 pm on Monday, 29 February, 2016 Permalink | Reply

      Amazing explanation :)

      Like

    • howardat58 5:33 pm on Monday, 29 February, 2016 Permalink | Reply

      It is difficult to believe that none of this geometry stuff existed before Euclid. His contribution was to show that an abstract system based on some reasonable axioms, those which matched practical experience, could be constructed and from which all the results and conclusions would follow, WITHOUT the use of pictures and hand-waving. Euclid’s definition of a line, “That which has no breadth”, makes it impossible to draw one !!! Nobody attempted to do this to even the natural numbers until Peano and others in 1900-1909
      https://en.wikipedia.org/wiki/Peano_axioms
      (worth a read)

      Like

      • Joseph Nebus 8:50 pm on Tuesday, 1 March, 2016 Permalink | Reply

        I don’t mean to suggest I think geometry started with Euclid. I’d be surprised if it turned out Euclid were even the first Ancient Greek to have a system which we’d recognize as organized and logically rigorous geometry. But the record of evidence is scattered, and Euclid did do so very well that he must have obliterated his precursors. It’s got to be something like how The Jazz Singer obliterates memory of the synchronized-sound movies made before then.

    The problems with definitions do point out something true about axioms. The obvious stuff, like what we mean by a line, is often extremely hard to explain. Perhaps it’s because the desire to explain terms using only simpler terms leaves us without the vocabulary or even the concepts to do work. Perhaps it’s that the most familiar things carry with them so many connotations and unstated assumptions we don’t know how to separate them out again.

        Peano axioms are a great read, yes. I’m a bit sad my undergraduate training in mathematics never gave me reason to study them directly; we were preparing for other things.

        Like

    • elkement (Elke Stangl) 7:19 am on Tuesday, 1 March, 2016 Permalink | Reply

      Thanks for the mention, but Axiom Fame should go to Christopher Adamson. He suggested Axiom and I suggested Conjecture in the Requests comment thread :-)

      Like

  • Joseph Nebus 3:00 pm on Monday, 11 January, 2016 Permalink | Reply
    Tags: , , logic,   

    Reading the Comics, January 8, 2016: Rerun-Heavy Edition 


    I couldn’t think of what connective theme there might be to the mathematically-themed comic strips of the last couple days. It finally struck me: there’s a lot of reruns in this. That’ll do. Most of them are reruns from before I started writing about comics so much in these parts.

    Bill Watterson’s Calvin and Hobbes for the 5th of January (a rerun, of course, from the 7th of January, 1986) is a kid-resisting-the-test joke. The particular form is trying to claim a religious exemption from mathematics tests. I sometimes see attempts to claim that mathematics is a kind of religion since, after all, you have to believe it’s true. I’ll grant that you do have to assume some things without proof. Those are the rules of logical inference, and the axioms of the field, particularly. But I can’t make myself buy a definition of “religion” that’s just “something you believe”.

    But there are religious overtones to a lot of mathematics. The field promises knowable universal truths, things that are true regardless of who and in what context might know them. And the study of mathematical infinity seems to inspire thoughts of God. Amir D Aczel’s The Mystery Of The Aleph: Mathematics, The Kabbala, and the Search for Infinity is a good read on the topic. Addition is still not a kind of religion, though.

    'My second boyfriend has a brain as big as a large seedless watermelon.' 'Robert, what is the square root of 2,647,129?' '1627 and how do you get ink stains out of your shirt pocket?'

    Bud Grace’s The Piranha Club for the 6th of January, 2016.

    Bud Grace’s The Piranha Club for the 6th of January uses the ability to do arithmetic as proof of intelligence. It’s a kind of intelligence, sure. There’s fun to be had in working out a square root in your head, or on paper. But there’s really no need for it now that we’ve got calculator technology, except for what it teaches you about how to compute.

    Ruben Bolling’s Super-Fun-Pak Comix for the 6th of January is an installment of A Voice From Another Dimension. It’s just what the title suggests, and of course it would have to be a three-panel comic. The idea that creatures could live in more, or fewer, dimensions of space is a captivating one. It’s challenging to figure how it could work, though. Spaces of one or two dimensions don’t seem like they would allow biochemistry to work. And, as I understand it, chemistry itself seems unlikely to work right in four or more dimensions of space too. But it’s still fun to think about.

    David L Hoyt and Jeff Knurek’s Jumble for the 7th of January is a counting-number joke. It does encourage asking whether numbers are created or discovered, which is a tough question. Counting numbers like “four” are so familiar and so apparently universal that they don’t seem to be constructs. (Even if they are, animals have an understanding of at least small counting numbers like these.) But if “four” is somehow not a human construct, then what about “4,000,000,000,000,000,000,000,000,000,000,000”, a number so large it’s hard to think of something we have that many of that we can visualize. And even if that is, “one fourth” seems a bit different from that, and “four i” — the number which, squared, gives us negative 16 — seems qualitatively different. But if they’re constructs, then why do they correspond well to things we can see in the real world?

    LIHYL (O O - - -), RUCYR (O - - O -), AMDTEN (O - - O O -), GAULEE (- O - O - O). The number that equals four plus four didn't exist until it was `(- - -) (- - - - -) (- -)'. Dashes between the parentheses in that last answer because it's some wordplay there.

    David L Hoyt and Jeff Knurek’s Jumble for the 7th of January, 2016. The link will likely expire around mid-February.

    Greg Curfman’s Meg Classics for the 7th of January originally ran the 19th of September, 1997. It’s about a kid distractingly interested in multiplication. You get these sometimes. My natural instinct is to put the bigger number first and the smaller number second in a multiplication. “2 times 27” makes me feel nervous in a way “27 times 2” never will.

    Hector D Cantu and Carlos Castellanos’s Baldo for the 8th of January is a rerun from 2011. It’s an old arithmetic joke. I wouldn’t be surprised if George Burns and Gracie Allen did it. (Well, a little surprised. Gracie Allen didn’t tend to play quite that kind of dumb. But everybody tells some jokes that are a little out of character.)

     
  • Joseph Nebus 3:00 pm on Thursday, 31 December, 2015 Permalink | Reply
    Tags: , , logic, ,   

    Reading the Comics, December 30, 2015: Seeing Out The Year Edition 


    There’s just enough comic strips with mathematical themes that I feel comfortable doing a last Reading the Comics post for 2015. And as maybe fits that slow week between Christmas and New Year’s, there’s not a lot of deep stuff to write about. But there is a Jumble puzzle.

    Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips gives us someone so wrapped up in measuring data as to not notice the obvious. The obvious, though, isn’t always right. This is why statistics is a deep and useful field. It’s why measurement is a powerful tool. Careful measurement and statistical tools give us ways to not fool ourselves. But it takes a lot of sampling, a lot of study, to give those tools power. It can be easy to get lost in the problems of gathering data. Plus numbers have this hypnotic power over human minds. I understand Lard’s problem.

    Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 27th of December messes with a kid’s head about the way we know 1 + 1 equals 2. The classic Principia Mathematica construction builds it out of pure logic. We come up with an idea that we call “one”, and another that we call “plus one”, and an idea we call “two”. If we don’t do anything weird with “equals”, then it follows that “one plus one equals two” must be true. But does the logic mean anything to the real world? Or might we be setting up a game with no relation to anything observable? The punchy way I learned this question was “one cup of popcorn added to one cup of water doesn’t give you two cups of soggy popcorn”. So why should the logical rules that say “one plus one equals two” tell us anything we might want to know about how many apples one has?
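
    If you’d like to see how little is taken on faith in that sort of construction, a modern proof assistant shows it compactly. Here’s a minimal sketch in Lean, which builds the natural numbers from the Peano axioms much as Principia built them from logic; the one-word proof works because both sides of the equation compute down to the same term, hiding machinery Whitehead and Russell had to spell out.

```lean
-- 1 + 1 = 2, checked against Lean's own construction of the naturals.
-- `rfl` succeeds because both sides reduce to the same canonical term,
-- Nat.succ (Nat.succ Nat.zero).
example : 1 + 1 = 2 := rfl
```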

    Words: LIHWE, CAQUK, COYKJE, TALAFO. Unscramble to - - - O O, O O - - -, - - - - - O, - - O - O -, and solve the puzzle: 'The math teacher liked teaching addition and subtraction - - - - - - -'.

    David L Hoyt and Jeff Knurek’s Jumble for the 28th of December, 2015. The link will probably expire in late January 2016.

    David L Hoyt and Jeff Knurek’s Jumble for the 28th of December features a mathematics teacher. That’s enough to include here. (You might have an easier time getting the third and fourth words if you reason what the surprise-answer word must be. You can use that to reverse-engineer what letters have to be in the circles.)

    Richard Thompson’s Richard’s Poor Almanac for the 28th of December repeats the Platonic Fir Christmas Tree joke. It’s in color this time. Does the color add to the perfection of the tree, or take away from it? I don’t know how to judge.

    A butterfly tells another 'you *should* feel guilty --- the flutter of your wings ended up causing a hurricane that claimed thousands of lives!'

    Rina Piccolo filling in for Hilary Price on Rhymes With Orange for the 29th of December, 2015. It’s a small thing but I always like the dog looking up in the title panel for Cartoonist Showcase weeks.

    Hilary Price’s Rhymes With Orange for the 29th of December gives its panel over to Rina Piccolo. Price often has guest-cartoonist weeks, which is a generous use of her space. Piccolo already has one and a sixth strips — she’s one of the Six Chix cartoonists, and also draws the charming Tina’s Groove — but what the heck. Anyway, this is a comic strip about the butterfly effect. That’s the strangeness by which a deterministic system can still be unpredictable. This counter-intuitive conclusion dates back to the 1890s, when Henri Poincaré was trying to solve the big planetary mechanics question. That question is: is the solar system stable? Is the Earth going to remain in about its present orbit indefinitely far into the future? Or might the accumulated perturbations from Jupiter and the lesser planets someday pitch it out of the solar system? Or, less likely, into the Sun? And the sad truth is, the best we can say is we can’t tell.
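
    You can watch the butterfly effect happen without waiting on the solar system. Here’s a minimal sketch using the logistic map, a standard toy example of chaos (my choice, nothing to do with Poincaré’s planetary work): two starting values agreeing to ten decimal places stop resembling each other within a few dozen steps of a perfectly deterministic rule.

```python
# Sensitive dependence on initial conditions with the logistic map.
# The rule x -> 4x(1 - x) is fully deterministic, yet two nearly
# identical starting points diverge completely.

def iterate(x, steps):
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

a, b = 0.4, 0.4 + 1e-10          # differ only in the tenth decimal place
for steps in (10, 30, 50):
    print(steps, iterate(a, steps), iterate(b, steps))
# By 50 steps the two trajectories bear no resemblance to each other.
```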

    In Brian Anderson’s Dog Eat Doug for the 30th of December, Sophie ponders some deep questions. Most of them are purely philosophical questions and outside my competence. “What are numbers?” is also a philosophical question, but it feels like something a mathematician ought to have a position on. I’m not sure I can offer a good one, though. Numbers seem to me to be these things which we imagine. They have some properties, and they obey certain rules when we combine them with other numbers. The most familiar of these numbers and properties correspond with some intuition many animals have about discrete objects. Many times over we’ve expanded the idea of what kinds of things might be numbers without losing the sense of how numbers can interact, somehow. And those expansions have generally been useful. They strangely match things we would like to know about the real world. And we can discover truths about these numbers and these relations that don’t seem to be obviously built into the definitions. It’s almost as if the numbers were real objects with the capacity to surprise and to hold secrets.

    Why should that be? The lazy answer is that if we came up with a construct that didn’t tell us anything interesting about the real world, we wouldn’t bother studying it. A truly irrelevant concept would be a couple forgotten papers tucked away in an unread journal. But that is missing the point. It’s like answering “why is there something rather than nothing” with “because if there were nothing we wouldn’t be here to ask the question”. That doesn’t satisfy. Why should it be possible to take some ideas about quantity that ravens, raccoons, and chimpanzees have, then abstract some concepts like “counting” and “addition” and “multiplication” from that, and then modify those concepts, and finally have the modification be anything we can see reflected in the real world? There is a mystery here. I can’t fault Sophie for not having an answer.

     
  • Joseph Nebus 3:00 pm on Saturday, 31 October, 2015 Permalink | Reply
    Tags: , , , logic, , , ,   

    Reading the Comics, October 29, 2015: Spherical Squirrel Edition 


    John Zakour and Scott Roberts’s Maria’s Day is going to Sunday-only publication. A shame, but I understand Zakour and Roberts choosing to focus their energies on better-paying venues. That those venues are “writing science fiction novels” says terrifying things about the economic logic of web comics.

    This installment, from the 23rd, is a variation on the joke about the lawyer, or accountant, or consultant, or economist, who carefully asks “what do you want the answer to be?” before giving it. Sports are a rich mine of numbers, though. Mostly they’re statistics, and we might wonder: why does anyone care about sports statistics? Once the score of a game is counted, what else matters? A sociologist and a sports historian are probably needed to give true, credible answers. My suspicion is that it amounts to money, as it ever does. If one wants to gamble on the outcomes of sporting events, one has to have a good understanding of what is likely to happen, and how likely it is to happen. And I suppose if one wants to manage a sporting event, one wants to spend money and time and other resources to best effect. That requires data, and that we see in numbers. And there are so many things that can be counted in any athletic event, aren’t there? All those numbers carry with them a hypnotic pull.

    In Darrin Bell’s Candorville for the 24th of October, Lemont mourns how he’s forgotten how to do long division. It’s an easy thing to forget. For one, we have calculators, as Clyde points out. For another, long division ultimately requires we guess at and then try to improve an answer. It can’t be reduced to an operation that will never require back-tracking and trying some part of it again. That back-tracking — say, trying to put 28 into the number seven times, and finding it actually goes at least eight times — feels like a mistake. It feels like the sort of thing a real mathematician would never do.

    And that’s completely wrong. Trying an answer, and finding it’s not quite right, and improving on it is perfectly sound mathematics. Arguably it’s the whole field of numerical mathematics. Perhaps students would find long division less haunting if they were assured that it is fine to get a wrong-but-close answer as long as you make it better.
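
    If it helps to see guess-and-improve dignified as an algorithm, here’s a minimal sketch. It uses Newton’s method for square roots (my example, not anything from the strip), and it does exactly what long division asks of a student: start with a wrong-but-close answer and make it better until it’s good enough.

```python
# Guess-and-improve, the respectable version: Newton's method for the
# square root of n.  Each pass replaces a wrong answer with a better
# one, the same back-tracking spirit as trial quotients in long division.

def sqrt_newton(n, guess=1.0, tolerance=1e-12):
    while abs(guess * guess - n) > tolerance:
        guess = (guess + n / guess) / 2.0   # average the guess with n/guess
    return guess

print(sqrt_newton(2.0))   # 1.41421356..., reached from the 'wrong' guess 1.0
```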

    John Graziano’s Ripley’s Believe It or Not for the 25th of October talks about the Rubik’s Cube, and all the ways it can be configured. I grant it sounds like 43,252,003,274,489,856,000 is a bit high a count of possible combinations. But it is about what I hear from proper mathematics texts, the ones that talk about group theory, so let’s let it pass.

    The Rubik’s Cube gets talked about in group theory, the study of things that work kind of like arithmetic. In this case, turning one of the faces — well, one of the thirds of a face — clockwise or counterclockwise by 90 degrees, so the whole thing stays a cube, works like adding or subtracting one, modulo 4. That is, we pretend the only numbers are 0, 1, 2, and 3, and the numbers wrap around. 3 plus 1 is 0; 3 plus 2 is 1. 1 minus 2 is 3; 1 minus 3 is 2. There are several separate rotations that can be done, each turning a third of each face of the cube. That each face of the cube starts a different color means it’s easy to see how these different rotations interact and create different color patterns. And rotations look easy to understand. We can at least imagine rotating most anything. In the Rubik’s Cube we can look at a lot of abstract mathematics in a handheld and friendly-looking package. It’s a neat thing.
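
    The modulo-4 arithmetic of a single face is easy enough to play with directly. Here’s a minimal sketch that tracks nothing but how many quarter-turns one face has made (the full cube group is vastly bigger; this is just the one-face piece of it):

```python
# One face of the cube: a quarter-turn is +1, and four quarter-turns
# bring you back where you started, so positions live in {0, 1, 2, 3}.

def turn(position, quarter_turns):
    return (position + quarter_turns) % 4

p = 0
p = turn(p, 1)    # one clockwise quarter-turn
p = turn(p, -2)   # two counterclockwise quarter-turns
print(p)          # 3, matching the text: 1 minus 2 is 3, modulo 4
```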

    Scott Hilburn’s The Argyle Sweater for the 26th of October is really a physics joke. But it uses (gibberish) mathematics as the signifier of “a fully thought-out theory” and that’s good enough for me. Also the talk of a “big boing” made me giggle and I hope it does you too.

    Izzy Ehnes’s The Best Medicine Cartoon makes, I believe, its debut for Reading the Comics posts with its entry for the 26th. It’s also the anthropomorphic-numerals joke for the week.

    Frank Page’s Bob the Squirrel is struggling under his winter fur this week. On the 27th Bob tries to work out the Newtonian forces involved in rolling about in his condition. And this gives me the chance to share a traditional mathematicians joke and a cliche punchline.

    The story goes that a dairy farmer knew he could be milking his cows better. He could surely get more milk, and faster, if only the operations of his farm were arranged better. So he hired a mathematician, to find the optimal way to configure everything. The mathematician toured every part of the pastures, the milking barn, the cows, everything relevant. And then the mathematician set to work devising a plan for the most efficient possible cow-milking operation. The mathematician declared, “First, assume a spherical cow.”

    The punch line has become a traditional joke in the mathematics and science fields. As a joke it comments on the folkloric disconnection between mathematicians and practicality. It also comments on the absurd assumptions that mathematicians and scientists will make for the sake of producing a model, and for getting an answer.

    The joke within the joke is that it’s actually fine to make absurd assumptions. We do it all the time. All models are simplifications of the real world, tossing away things that may be important to the people involved but that just complicate the work we mean to do. We may assume cows are spherical because that reflects, in a not too complicated way, that while they might choose to get near one another they will also, given the chance, leave one another some space. We may pretend a fluid has no viscosity, because we are interested in cases where the viscosity does not affect the behavior much. We may pretend people are fully aware of the costs, risks, and benefits of any action they wish to take, at least when they are trying to decide which route to take to work today.

    That an assumption is ridiculous does not mean the work built on it is ridiculous. We must defend why we expect those assumptions to make our work practical without introducing too much error. We must test whether the conclusions drawn from the assumption reflect what we wanted to model reasonably well. We can still learn something from a spherical cow. Or a spherical squirrel, if that’s the case.

    Keith Tutt and Daniel Saunders’s Lard’s World Peace Tips for the 28th of October is a binary numbers joke. It’s the other way to tell the joke about there being 10 kinds of people in the world. (I notice that joke made in the comments on GoComics.com. That was inevitable.)
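
    For anyone who wants the joke spelled out, two lines of Python show why “10” can mean two:

```python
print(int("10", 2))     # read '10' as a binary numeral: 2
print(format(2, "b"))   # write two in binary: '10'
```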

    Eric the Circle for the 29th of October, this one by “Gilly” again, jokes about mathematics being treated as if quite subject to law. The truth of mathematical facts isn’t subject to law, of course. But the use of mathematics is. It’s obvious, for example, in the setting of educational standards. What things a member of society must know to be a functioning part of it are, western civilization has decided, a subject governments may speak about. Thus what mathematics everyone should know is a subject of legislation, or at least legislation in the attenuated form of regulated standards.

    But mathematics is subject to parliament (or congress, or the diet, or what have you) in subtler ways. Mathematics is how we measure debt, that great force holding society together. And measurement again has been (at least in western civilization) a matter for governments. We accept the principle that a government may establish a fundamental unit of weight or fundamental unit of distance. So too may it decide what is a unit of currency, and into how many pieces the unit may be divided. And from this it can decide how to calculate with that currency: if the “proper” price of a thing would be, say, five-ninths of the smallest available bit of currency, then what should the buyer give the seller?

    Who cares, you might ask, and fairly enough. I can’t get worked up about the risk that I might overpay four-ninths of a penny for something, nor feel bad that I might cheat a merchant out of five-ninths of a penny. But consider: when Arabic numerals first made their way to the west they were viewed with suspicion. Everyone at the market or the moneylenders’ knew how Roman numerals worked, and could follow addition and subtraction with ease. Multiplication was harder, but it could be followed. Division was a disaster and I wouldn’t swear that anyone has ever successfully divided using Roman numerals, but at least everything else was nice and familiar.

    But then suddenly there was this influx of new symbols, only one of them something that had ever been a number before. One of them at least looked like the letter O, but it was supposed to represent a missing quantity. And every calculation on this was some strange gibberish where one unfamiliar symbol plus another unfamiliar symbol turned into yet another unfamiliar symbol or maybe even two symbols. Sure, the merchant or the moneylender said it was easier, once you learned the system. But they were also the only ones who understood the system, and the ones who would profit by making “errors” that could not be detected.

    Thus we see governments, even in worldly, trade-friendly city-states like Venice, prohibiting the use of Arabic numerals. Roman numerals may be inferior by every measure, but they were familiar. They stood at least until enough generations passed that the average person could feel “1 + 1 = 2” contained no trickery.

    If one sees in this parallels to the problem of reforming mathematics education, all I can offer is that people are absurd, and we must love the absurdness of them.

    One last note, so I can get this essay above two thousand words somehow. In the 1910s Alfred North Whitehead and Bertrand Russell published the awesome and menacing Principia Mathematica. This was a project to build arithmetic, and all mathematics, on sound logical grounds utterly divorced from the great but fallible resource of human intuition. They did probably as well as human beings possibly could. They used a bewildering array of symbols and such a high level of abstraction that a needy science fiction movie could put up any random page of the text and pass it off as Ancient High Martian.

    But they were mathematicians and philosophers, and so could not avoid a few wry jokes, and one of them comes in Volume II, around page 86 (it’ll depend on the edition you use). There, in Proposition *110.643, Whitehead and Russell establish “1 + 1 = 2” and remark, “the above proposition is occasionally useful”. They note at least three uses in their text alone. (Of course this took so long because they were building a lot of machinery before getting to mere work like this.)

    Back in my days as a graduate student I thought it would be funny to put up a mock political flyer, demanding people say “NO ON PROP *110.643”. I was wrong. But the joke is strong enough if you don’t go to the trouble of making up the sign. I didn’t make up the sign anyway.

    And to murder my own weak joke: arguably “1 + 1 = 2” is established much earlier, around page 380 of the first volume, in proposition *54.43. The thing is, that proposition warns that “it will follow, when mathematical addition has been defined”, which it hasn’t been at that point. But if you want to say it’s Proposition *54.43 instead go ahead; it will not get you any better laugh.

    If you’d like to see either proof rendered as non-head-crushingly as possible, the Metamath Proof Explorer shows the reasoning for Proposition *54.43 as well as that for *110.643. And it contains hyperlinks so that you can try to understand the exact chain of reasoning which comes to that point. Good luck. I come from a mathematical heritage that looks at the Principia Mathematica and steps backward, quickly, before it has the chance to notice us and attack.

     
    • BunKaryudo 5:51 am on Monday, 2 November, 2015 Permalink | Reply

      I must admit, I thought the spherical cow joke was pretty funny. It’s also true, though, that as almost certainly the least mathematically gifted of your readers, it hadn’t occurred to me until I read a bit further that spherical cows might actually be a useful abstraction for certain types of problem. It might also make life easier for farmers since they could roll them back to their byres.

      Like

      • Joseph Nebus 1:13 am on Friday, 6 November, 2015 Permalink | Reply

        I’m sorry to have had your comment hidden a while. WordPress thought it might be spam and I failed to check sooner. I guess it doesn’t understand why spherical cows might be talked about so much.

        Still, yes, in many ways cows could be made easier to work with if they were much more spherical. Even an ellipsoidal cow would offer some advantages.

        Liked by 1 person

        • BunKaryudo 1:09 pm on Friday, 6 November, 2015 Permalink | Reply

          It’s true. Those stubby little cow legs just get in the way.

          Incidentally, don’t worry about my comment going missing for a while. My comments quite often seem to be mistaken for spam by WordPress. Perhaps I should stop wearing the grey trenchcoat, sunglasses and false mustache.

          Like

    • elkement (Elke Stangl) 8:06 am on Wednesday, 18 November, 2015 Permalink | Reply

      My favorite is of course your awesome find of humor in Principia Mathematica :-)

      Like

      • Joseph Nebus 4:08 am on Friday, 20 November, 2015 Permalink | Reply

        I’m so glad you like. I had thought the ‘occasionally useful’ proposition the most famous bit of the Principia Mathematica, but then I suppose ‘most famous’ doesn’t actually mean anyone’s heard of it.

        Liked by 1 person

  • Joseph Nebus 12:00 pm on Sunday, 25 October, 2015 Permalink | Reply
    Tags: , logic, , , ,   

    Reading the Comics, October 22, 2015: Foundations Edition 


    I am, yes, saddened to hear that Apartment 3-G is apparently shuffling off to a farm upstate. There it will be visited by a horrifying kangaroo-deer-fox-demon. And an endless series of shots of two talking heads saying they should go outside, when they’re already outside. But there are still many comic strips running, on GoComics.com and on Comics Kingdom. They’ll continue to get into mathematically themed subjects. And best of all I can use a Popeye strip to talk about the logical foundations of mathematics and what computers can do for them.

    Jef Mallett’s Frazz for the 18th of October carries on the strange vendetta against “showing your work”. If you do read through the blackboard-of-text you’ll get some fun little jokes. I like the explanation of how “obscure calculus symbols” could be used, “And a Venn diagram!” Physics majors might notice the graph on the center-right, to the right of the DNA strand. That could show many things, but the one most plausible to me is a plot of the velocity and the position of an object undergoing simple harmonic motion.

    Still, I do wonder what work Caulfield would show if the problem were to say what fraction were green apples, if there were 57 green and 912 red apples. There are levels where “well, duh” will not cut it. In case “well, duh” does cut it, then a mathematician might say the answer is “obvious”. But she may want to avoid the word “obvious”, which has a history of being dangerously flexible. She might then say “by inspection”. That means, basically, look at it and yeah, of course that’s right.

    Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney for the 18th of October uses mathematics as the quick way to establish “really smart”. It doesn’t take many symbols this time around, curiously. Superstar Equation E = mc² appears in a misquoted form. At first that seems obvious, since if there were an equals sign in the denominator the whole expression would not parse. Then, though, you notice: if E and m and c mean what they usually do in the Superstar Equation, then “E − mc²” is equal to zero. It shouldn’t be in the denominator anyway. So, the big guy has to be the egghead.

    Peter Maresca’s Origins of the Sunday Comics for the 18th of October reprints one of Winsor McCay’s Dreams of the Rarebit Fiend strips. As normal for Dreams and so much of McCay’s best work, it’s a dream-to-nightmare strip. And this one gives a wonderful abundance of numerals, and the odd letter, to play with. Mathematical? Maybe not. But it is so merrily playful it’d be a shame not to include.

    Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 20th of October is a joke soundly in set theory. It also feels like it’s playing with a set-theory-paradox problem but I can’t pin down which one exactly. It feels most like the paradox of “find the smallest uninteresting counting number”. But being the smallest uninteresting counting number would be an interesting property to have. So any candidate number has to count as interesting. It also feels like it’s circling around the heap paradox. Take a heap of sand and remove one grain, and you still have a heap of sand. But if you keep doing that, at some point, you have just one piece of sand from the original pile, and that is no heap. When did it stop being a heap?

    Daniel Shelton’s Ben for the 21st of October is a teaching-arithmetic problem, using jellybeans. And fractions. Well, real objects can do wonders in connecting a mathematical abstraction to something one has an intuition for. One just has to avoid unwanted connotations and punching.

    Doug Savage’s Savage Chickens for the 21st of October uses “mathematics homework” as the emblem of the hardest kind of homework there might ever be. I saw the punch line coming a long while off, but still laughed.

    The Sea Hag is dismissive of scientists, who try to take credit for magic even though 'they can't even THINK! They have to use machines to tell them what two plus two is!! And another machine to prove the first one was right!'

    Bud Sagendorf’s Popeye for the 22nd of October, 2015. The strip originally ran sometime in 1981. This would be only a few years after the Four-Color Theorem was solved by computer. The computer did this by trying out all the possibilities and reporting everything was OK.

    Bud Sagendorf’s Popeye began what it billed as a new story, “Science Vs Sorcery”, on Monday the 19th. I believe it’s properly a continuation of the previous story, though, “Back-Room Pest!” which began the 13th of July. “Back-Room Pest!”, according to my records, originally ran from the 27th of July, 1981, through to the 23rd of January, 1982. So there’s obviously time missing. And this story, like “Back-Room Pest”, features nutty inventor Professor O G Wotasnozzle. I know, I know, you’re all deeply interested in working out correct story guides for this.

    Anyway, the Sea Hag in arguing against scientists claims “they can’t even think! They have to use machines to tell them what two plus two is!! And another machine to prove the first one was right!” It’s a funny line and remarkably pointed for an early-80s Popeye comic. The complaint that computers leave one unable to do even simple reasoning is an old one, of course. The complaint has been brought against every device or technique that promises to lighten a required mental effort. It seems to me similar to the way new kinds of weapons are accused of making war too monstrous and too unchivalrous, too easily done by cowards. I suppose it’s also the way a fable like the story of John Henry holds up human muscle against the indignity of mechanical work.

    The crack about needing another machine to prove the first was right is less usual, though. Sagendorf may have meant to be whimsically funny, but he hit on something true. One of the great projects of late 19th and early 20th century mathematics was the attempt to place its foundations on strict logic, independent of all human intuition. (Intuition can be a great guide, but it can lead one astray.) Out of this came a study of proofs as objects, as mathematical constructs which must themselves follow certain rules.

    And here we reach a spooky borderland between mathematics and sorcery. We can create a proof system that is, in a way, a language with a grammar. A string of symbols that satisfies all the grammatical rules is itself a proof, a valid argument following from the axioms of the system. (The axioms are some basic set of statements which we declare to be true by assumption.) And it does not matter how the symbols are assembled: by mathematician, by undergrad student worker, by monkey at a specialized typewriter, by a computer stringing things together. Once a grammatically valid string of symbols is done, that string of symbols is a theorem, and the string itself is the proof, written out. If it were not for the modesty of what is claimed to be done — proofs about arithmetic or geometry or the like — one might think we had left behind mathematics and were now summoning demons by declaring their True Names. Or risk the stars overhead going out, one by one.

    So it is possible to create a machine that simply grinds out proofs. Or, since this is the 21st century, a computer that does that. If the computer is given no guidance it may spit out all sorts of theorems that are true but boring. But we can set up a system by which the computer, by itself, works out whether a given theorem does follow from the axioms of mathematics. More, this has been done. It’s a bit of a pain, because any proofs that are complicated enough to really need checking involve an incredible number of steps. But for a challenging enough proof it is worth doing, and automated proof checking is one of the tools mathematicians can now draw on.
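
    Here’s a toy version of the Sea Hag’s two machines, in a formal system I made up for the occasion: one axiom, two rules, nothing remotely as grand as real mathematics. One machine grinds out every theorem by blindly applying the grammar; the other checks a claimed theorem without redoing the derivation.

```python
# A made-up formal system.  The axiom is the string 'T'.  The two
# inference rules: from any theorem s you may conclude s + 'o', and
# you may conclude 'S' + s.  The theorems turn out to be exactly the
# strings S...To... and nothing else.
from collections import deque
import re

AXIOM = "T"

def derive(s):
    """Everything provable from s in one step."""
    return [s + "o", "S" + s]

def grind(max_len):
    """Machine one: grind out every theorem up to a given length,
    breadth-first from the axiom."""
    seen, queue = {AXIOM}, deque([AXIOM])
    while queue:
        s = queue.popleft()
        for t in derive(s):
            if len(t) <= max_len and t not in seen:
                seen.add(t)
                queue.append(t)
    return sorted(seen)

def check(s):
    """Machine two: verify a claimed theorem without re-deriving it.
    Here checking is just matching the grammar S*To*."""
    return re.fullmatch(r"S*To*", s) is not None

theorems = grind(4)
print(theorems)                        # ['SSST', 'SST', ..., 'Too', 'Tooo']
assert all(check(t) for t in theorems)
assert not check("oT")                 # not derivable, and the checker agrees
```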

    Of course, then we have the problem of knowing that the computer is carrying out its automatic-proof programming correctly. I’m not stepping into that kind of trouble.

    The attempt to divorce mathematics from all human intuition was a fruitful one. The most awe-inspiring discovery to come from it is surely that of incompleteness. Any mathematical system interesting enough will contain within it statements that are true, but can’t be proven true from the axioms.

    Georgia Dunn’s Breaking Cat News for the 22nd of October features a Venn Diagram. It’s part of how cats attempt to understand toddlers. My understanding is that their work is correct.

     
  • Joseph Nebus 2:00 pm on Sunday, 11 October, 2015 Permalink | Reply
    Tags: , logic,   

    The Kind Of Book That Makes Me Want To Refocus On Logic 


    For my birthday my love gave me John Stillwell’s Roads to Infinity: The Mathematics of Truth and Proof. It was a wonderful read. More, it’s the sort of read that gets me excited about a subject.

    The subject in this case is mathematical logic, and specifically the sections of it which describe infinitely large sets, and the provability of theorems. That these are entwined subjects may seem superficially odd. Stillwell explains well how the insights developed in talking about infinitely large sets develop the tools to study whether logical systems are complete and decidable.

    At least it explains it well to me. I know I’m not a typical reader. I’m not certain if I would have understood the book as well as I did if I hadn’t had a senior-level course in mathematical logic. And that was a long time ago, but it was also the only mathematics course which described approaches to killing the Hydra. Stillwell’s book talks about it too and I admit I appreciate the refresher. (Yeah, this is not a literal magical all-but-immortal multi-headed beast mathematicians deal with. It’s also not the little sea creature. What mathematicians mean by a ‘hydra’ is a branching graph which looks kind of like a grape vine, and by ‘slaying’ it we mean removing branches according to particular rules that make it not obvious that we’ll ever get to finish.)
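
    For the curious, here’s a minimal sketch of one version of that game, the Kirby-Paris hydra as I understand the rules, with trees as nested Python lists and a head as an empty list. Chopping a head that has a grandparent makes the grandparent sprout fresh copies of the wounded parent, which is why it’s not obvious the beast ever dies. (It always does, though famously that can’t be proven within ordinary Peano arithmetic.)

```python
# The hydra as nested lists: [] is a head, [h1, h2, ...] a node with
# children.  Rule of play (Kirby-Paris, as I understand it): chop a
# head; if it had a grandparent, the grandparent grows n fresh copies
# of the wounded parent, where n is the move number.
import copy

def chop(node, n):
    """Chop the leftmost head below `node`.  Returns ('parent', wounded)
    if the head hung directly off `node` (the caller must graft copies),
    or ('done', new_node) if the regrowth happened deeper inside."""
    for i, child in enumerate(node):
        if child == []:                       # a head attached to `node`
            return ('parent', node[:i] + node[i + 1:])
        kind, result = chop(child, n)
        if kind == 'parent':                  # `node` is the grandparent
            grafts = [copy.deepcopy(result) for _ in range(n + 1)]
            return ('done', node[:i] + grafts + node[i + 1:])
        return ('done', node[:i] + [result] + node[i + 1:])

def slay(hydra):
    """Chop until nothing is left; return the number of moves taken.
    A head hanging straight off the root is simply removed."""
    moves = 0
    while hydra:
        moves += 1
        _, hydra = chop(hydra, moves)
    return moves

print(slay([[[]]]))   # even a three-node vine takes 3 moves
```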

    I appreciate also — maybe as much as I liked the logic — the historical context. The development of how mathematicians understand infinity and decidability is the sort of human tale that people don’t realize even exists. One of my favorite sections mentioned a sequence in which great minds, Gödel among them, took turns not understanding the reasoning behind some new important and now-generally-accepted breakthroughs.

    So I’m left feeling I want to recommend the book, although I’m not sure who to. It’s obviously a book that scouts out mathematical logic in ways that make sense if you aren’t a logician. But it uses — as it must — the notation and conventions and common concepts of mathematical logic. My love, a philosopher by trade, would probably have no trouble understanding any particular argument, and would probably pick up symbols as they’re introduced. But there’d have to be a lot of double-checking notes about definitions. And the easy familiarity with non-commutative multiplication is a mathematics-major thing, and to a lesser extent a physics-major thing. Someone without that background would fairly worry something weird was going on other than the weirdness that was going on.

    Anyway, the book spoke to a particular kind of mathematics I’d loved and never had the chance to do much with. If this is a field you feel some love for, and have some training in, then it may be right for you.

     
    • Barb Knowles 12:32 pm on Monday, 12 October, 2015 Permalink | Reply

      I am not a mathematician by any stretch of the imagination. But I love words and ideas and the birth and history of those words and ideas. Thank you for this interesting post.

      Like

  • Joseph Nebus 3:00 pm on Sunday, 13 September, 2015 Permalink | Reply
    Tags: , logic, , ,   

    Reading the Comics, September 10, 2015: Back To School Edition 


    I assume that Comic Strip Master Command ordered many mathematically-themed comic strips to coincide with the United States school system getting back up to full speed. That or they knew I’d have a busy week. This is only the first part of the comic strips that have appeared since Tuesday.

    Mel Henze’s Gentle Creatures for the 7th and the 8th of September use mathematical talk to fill out the technobabble. It’s a cute enough notion. These particular strips ran last year, and I talked about them then. The talk of a “Lagrangian model” interests me. It name-checks a real and important and interesting scientist who’s not Einstein or Stephen Hawking. But I’m still not aware of any “Lagrangian model” that would be relevant to starship operations.

    Jon Rosenberg’s Scenes from a Multiverse for the 7th of September speaks of a society of “powerful thaumaturgic diagrammers” who used Venn diagrams not wisely but too well. The diagrammers got into trouble when one made “a Venn diagram that showed the intersection of all the Venns and all the diagrams”. I imagine this not to be a rigorous description of what happened. But Venn diagrams match up well with many logic problems. And self-referential logic, logic statements that describe their own truth or falsity, is often problematic. So I would accept a story in which Venn diagrams about Venn diagrams leads to trouble. The motif of tying logic and mathematics into magic is an old one. I understand it. A clever mathematical argument often feels like magic, especially the surprising ones. To me, the magical theorems are those that prove a set of seemingly irrelevant lemmas. Then, with that stock in hand, the theorem goes on to the main point in a few wondrous lines. If you can do that, why not transmute lead, or accidentally retcon a society out of existence?

    Mark Anderson’s Andertoons for the 8th of September just delights me. Occasionally I feel a bit like Mark Anderson’s volunteer publicity department. A panel like this, though, makes me feel that he deserves it.

    Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney for the 8th of September is the first anthropomorphic-geometric-figures joke we’ve had here in a while.

    Mike Baldwin’s Cornered for the 9th of September is a drug testing joke, and a gambling joke. Both are subjects driven by probabilities. Any truly interesting system is always changing. If we want to know whether something affects the system we have to know whether we can make a change that’s bigger than the system does on its own. And this gives us drug-testing and other statistical inference tests. If we apply a drug, or some treatment, or whatever, how does the system change? Does it change enough, consistently, that it’s not plausible that the change just happened by chance? Or by some other influence?
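
    Here’s a minimal sketch of that reasoning, using a permutation test, one of the simpler statistical-inference tools. The measurements are made up for illustration: if shuffling the “treated” and “untreated” labels produces a difference as big as the observed one much of the time, then chance alone explains the data just fine.

```python
# Did the treatment change anything, or could chance do this alone?
# Shuffle the group labels many times and count how often a difference
# at least as large as the real one shows up by accident.
import random

treated   = [7.1, 6.8, 7.4, 7.9, 6.9]   # made-up measurements
untreated = [6.2, 6.5, 6.0, 6.7, 6.4]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(treated) - mean(untreated)
pooled = treated + untreated
trials, extreme = 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)
    fake = mean(pooled[:5]) - mean(pooled[5:])
    if abs(fake) >= abs(observed):
        extreme += 1
print(extreme / trials)   # a small fraction makes 'just chance' implausible
```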

    You might have noticed a controversy going around psychology journals. A fair number of experiments were re-run, by new experimenters following the original protocols as closely as possible. Quite a few of the reported results didn’t happen again, or happened in a weaker way. That’s produced some handwringing. No one thinks deliberate experimental fraud is that widespread in the field. There may be accidental fraud, people choosing data or analyses that heighten the effect they want to prove, or that pick out any effect. However, it may also simply be chance again. Psychology experiments tend to have a lower threshold of “this is sufficiently improbable that it indicates something is happening” than, say, physics has. Psychology has a harder time getting the raw data. A supercollider has enormous startup costs, but you can run the thing for as long as you like. And every electron is the same thing. A test of how sleep deprivation affects driving skills? That’s hard. No two sleepers or drivers are quite alike, even at different times of the day. There’s not an obvious cure. Independent replication of previously done experiments helps. That’s work that isn’t exciting — necessary as it is, it’s also repeating what others did — and it’s harder to get people to do it, or pay for it. But in the meantime it’s harder to be sure what interesting results to trust.

    Ruben Bolling’s Super-Fun-Pak Comix for the 9th of September is another Chaos Butterfly installment. I don’t want to get folks too excited for posts I technically haven’t written yet, but there is more Chaos Butterfly soon.

    Rick Stromoski’s Soup To Nutz for the 10th of September has Royboy guess the odds of winning a lottery are 50-50. Silly, yes, but only because we know that anyone is much more likely to lose a lottery than to win it. But then how do we know that?

    Since the rules of a lottery are laid out clearly we can reason about the probability of winning. We can calculate the number of possible outcomes of the game, and how many of them count as winning. Suppose each of those possible outcomes is equally likely. Then the probability of winning is the number of winning outcomes divided by the number of possible outcomes. Quite easy.
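
    To make that concrete, consider a made-up but typical lottery where you pick six numbers out of 49 and win only by matching all six (my example, not Royboy’s):

```python
# Counting outcomes for a hypothetical pick-6-of-49 lottery.
from math import comb

outcomes = comb(49, 6)     # every possible draw, all equally likely
winning = 1                # only one draw matches your ticket
print(outcomes)            # 13983816
print(winning / outcomes)  # about 7.15e-08, nowhere near 50-50
```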

    — Of course, that’s exactly what Royboy did. There are two possible outcomes, winning or losing. Lacking reason to think they aren’t equally likely, he concluded a win and a loss were just as probable.

    We have to be careful what we mean by “an outcome”. What we probably mean for a drawn-numbers lottery is the number of ways the lottery numbers can be drawn. For a scratch-off card we mean the number of tickets that can be printed. But we’re still stuck with this idea of “equally likely” outcomes. I suspect we know what we mean by this, but trying to say what that is clearly, and without question-begging, is hard. And even this works only because we know the rules by which the lottery operates. Or we can look them up. If we didn’t know the details of the lottery’s workings, past the assumption that it has consistently followed rules, what could we do?

    Well, that’s what we have probability classes for, and particularly the field of Bayesian probability. This field tries to estimate the probabilities of things based on what actually happens. Suppose Royboy played the lottery fifty times and lost every time. That would smash the idea that his chances were 50-50, although that would not yet tell him what the chances really are.
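
    Here’s a minimal sketch of the Bayesian version of Royboy’s education, a standard Beta-Binomial update with numbers of my choosing: start agnostic about the chance of winning, then watch fifty losses push the estimate down.

```python
# Bayesian updating: a Beta(a, b) prior on the chance of winning,
# updated by observed wins and losses.  Beta(1, 1) is the flat,
# 'no idea at all' prior.
a, b = 1, 1                   # prior: every win-chance equally believable
wins, losses = 0, 50          # Royboy's fifty losing tickets
a, b = a + wins, b + losses   # posterior is Beta(a + wins, b + losses)
print(a / (a + b))            # posterior mean: about 0.0192, not 0.5
```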

     
    • ivasallay 5:33 pm on Tuesday, 15 September, 2015 Permalink | Reply

      Soup to Nutz could make a worthwhile classroom discussion.

      Like

      • Joseph Nebus 12:18 am on Friday, 18 September, 2015 Permalink | Reply

        Not just a discussion — you could almost hang a whole course in probability on this one! I had to restrain myself from writing forever about it and just publish already.

        Like
