Tagged: logic

  • Joseph Nebus 4:00 pm on Sunday, 18 June, 2017 Permalink | Reply
    Tags: Flash Gordon, Francis, logic

    Reading the Comics, June 17, 2017: Icons Of Mathematics Edition 


    Comic Strip Master Command just barely missed being busy enough for me to split the week’s edition. Fine for them, I suppose, although it means I’m going to have to scramble together something for the Tuesday or the Thursday posting slot. Ah well. As befits the comics, there’s a fair bit of mathematics as an icon in the past week’s selections. So let’s discuss.

    Mark Anderson’s Andertoons for the 11th is our Mark Anderson’s Andertoons for this essay. Kind of a relief to have that in right away. And while the cartoon shows a real disaster of a student at the chalkboard, there is some truth to the caption. Ruling out plausible-looking wrong answers is progress, usually. So is coming up with plausible-looking answers to work out whether they’re right or wrong. The troubling part here, I’d say, is that the kid came up with pretty poor guesses about what the answer might be. He ought to be able to guess that it’s got to be an odd number, and has to be less than 10, and really ought to be less than 7. If you spot that then you can’t make more than two wrong guesses.

    Patrick J Marrin’s Francis for the 12th starts with what sounds like a logical paradox, about whether the Pope could make an infallibly true statement that he was not infallible. Really it sounds like a bit of nonsense. But the limits of what we can know about a logical system will often involve questions of this form. We ask whether something can prove whether it is provable, for example, and come up with a rigorous answer. So that’s the mathematical content which justifies my including this strip here.

    Border Collies are, as we know, highly intelligent. The dogs are gathered around a chalkboard full of mathematics. 'I've checked my calculations three times. Even if master's firm and calm and behaves like an alpha male, we *should* be able to whip him.'

    Niklas Eriksson’s Carpe Diem for the 13th of June, 2017. Yes, yes, it’s easy to get people excited for the Revolution, but it’ll come to a halt when someone asks about how they get the groceries afterwards.

    Niklas Eriksson’s Carpe Diem for the 13th is a traditional use of the blackboard full of mathematics as symbolic of intelligence. Of course ‘E = mc^2’ gets in there. I’m surprised that both π and 3.14 do, too, for as little as we see on the board.

    Mark Anderson’s Andertoons for the 14th is a nice bit of reassurance. Maybe the cartoonist was worried this would be a split-week edition. The kid seems to be the same one as the 11th, but the teacher looks different. Anyway there’s a lot you can tell about shapes from their perimeter alone. The one which most startles me comes up in calculus: by doing the right calculation about the lengths and directions of the edge of a shape you can tell how much area is inside the shape. There’s a lot of stuff in this field — multivariable calculus — that’s about swapping between “stuff you know about the boundary of a shape” and “stuff you know about the interior of the shape”. And finding area from tracing the boundary is one of them. It’s still glorious.
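
    (That calculation, if you want to look it up, is Green’s theorem at work: trace a simple closed curve counterclockwise and the area it encloses is A = \frac{1}{2} \oint \left( x \, dy - y \, dx \right) . Everything in that integral happens on the boundary; the interior never gets consulted.)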

    Samson’s Dark Side Of The Horse for the 14th is a counting-sheep joke and a Pi Day joke. I suspect the digits of π would be horrible for lulling one to sleep, though. They lack the just-enough-order that something needs for a semiconscious mind to drift off. Horace would probably be better off working out Collatz sequences.

    Dana Simpson’s Phoebe and her Unicorn for the 14th mentions mathematics as iconic of what you do at school. Book reports also make the cut.

    Dr Zarkov: 'Flash, this is Professor Quita, the inventor of the ... ' Prof Quita: 'Caramba! NO! I am a mere mathematician! With numbers, equations, paper, pencil, I work ... it is my good amigo, Dr Zarkov, who takes my theories and builds ... THAT!!' He points to a bigger TV screen.

    Dan Barry’s Flash Gordon for the 31st of July, 1962, rerun the 16th of June, 2017. I am impressed that Dr Zarkov can make a TV set capable of viewing alternate universes. I still literally do not know how it is possible that we have sound for our new TV set, and I labelled and connected every single wire in the thing. Oh, wouldn’t it be a kick if Dr Zarkov has the picture from one alternate universe but the sound from a slightly different other one?

    Dan Barry’s Flash Gordon for the 31st of July, 1962, and rerun the 16th, I’m including just because I love the old-fashioned image of a mathematician in Professor Quita here. At this point in the comic strip’s run it was set in the far-distant future year of 1972, and the action here is on one of the busy multinational giant space stations. Flash himself is just back from Venus, where he’d set up some dolphins as assistants to a fish-farming operation helping to feed that world and ours. And for all that early-60s futurism, look at that gorgeous old adding machine he’s still got. (Professor Quita’s discovery is a way to peer into alternate universes, according to the next day’s strip. I’m kind of hoping this means they’re going to spend a week reading Buck Rogers.)

     
  • Joseph Nebus 4:00 pm on Friday, 9 June, 2017 Permalink | Reply
    Tags: logic, Mr Boffo, perfect numbers, Pop Culture Shock Therapy

    Reading the Comics, June 3, 2017: Feast Week Conclusion Edition 


    And now finally I can close out last week’s many mathematically-themed comic strips. I had hoped to post this Thursday, but the Why Stuff Can Orbit supplemental took up my writing energies and, eventually, the timeslot. This also ends up being the first time I’ve had one of Joe Martin’s comic strips since the Houston Chronicle ended its comics pages, and I admit I’m not sure how I’m going to work this. I’m also not perfectly sure what the comic strip means.

    So Joe Martin’s Mister Boffo for the 1st of June seems to be about a disastrous mathematics exam, with a kid doing badly enough that he hasn’t even got numbers to express the score exactly. Also I’m not sure there is a way to link to the strip I mean exactly; the archives for Martin’s strips are not … organized the way I would have done. Well, they’re his business.

    A Time To Worry: '[Our son] says he got a one-de-two-three-z on the math test.'

    So Joe Martin’s Mister Boffo for the 1st of June, 2017. The link is probably worthless, since I can’t figure out how to work its archives. Good luck yourselves with it.

    Greg Evans’s Luann Againn for the 1st reruns the strip from the 1st of June, 1989. It’s your standard resisting-the-word-problem joke. On first reading the strip I didn’t get what the problem was asking for, and supposed that the text had garbled the problem, if there were an original problem. That was my sloppiness, is all; it’s a perfectly solvable question once you actually read it.

    J C Duffy’s Lug Nuts for the 1st — another day that threatened to be a Reading the Comics post all on its own — is a straggler Pi Day joke. It’s just some Dadaist clowning about.

    Doug Bratton’s Pop Culture Shock Therapy for the 1st is a wordplay joke that uses word problems as emblematic of mathematics. I’m okay with that; much of the mathematics that people actually want to do amounts to extracting from a situation the things that are relevant and forming an equation based on that. This is what a word problem is supposed to teach us to do.

    Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 1st — maybe I should have done a Reading the Comics for that day alone — riffs on the idle speculation that God would be a mathematician. It does this by showing a God uninterested in two logical problems. The first is the question of whether there’s an odd perfect number. Perfect numbers are these things that haunt number theory. (Everything haunts number theory.) It starts with idly noticing what happens if you pick a number, find the numbers that divide into it, and add those up. For example, 4 can be divided by 1 and 2; those add to 3. 5 can only be divided by 1; that adds to 1. 6 can be divided by 1, 2, and 3; those add to 6. For a perfect number the divisors add up to the original number. Perfect numbers look rare; for a thousand years or so only four of them (6, 28, 496, and 8128) were known to exist.
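
    If you’d like to watch the pattern turn up, here’s a little Python sketch, my own illustration and nothing from the comic, that hunts for perfect numbers the slow, obvious way:

        def proper_divisor_sum(n):
            """Add up the divisors of n that are smaller than n itself."""
            return sum(d for d in range(1, n // 2 + 1) if n % d == 0)

        # A number is perfect when it equals the sum of its proper divisors.
        print([n for n in range(2, 10000) if proper_divisor_sum(n) == n])
        # [6, 28, 496, 8128]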

    All the perfect numbers we know of are even. More, they’re all numbers that can be written as the product 2^{p - 1} \cdot \left(2^p - 1\right) for certain prime numbers ‘p’. (They’re the ones for which 2^p - 1 is itself a prime number.) What we don’t know, and haven’t got a hint about proving, is whether there are any odd perfect numbers. We know some things about odd perfect numbers, if they exist, the most notable of them being that they’ve got to be incredibly huge numbers, much larger than a googol, the standard idea of an incredibly huge number. Presumably an omniscient God would be able to tell whether there were an odd perfect number, or at least would be able to care whether there were. (It’s also not known if there are infinitely many perfect numbers, by the way. This reminds us that number theory is pretty much nothing but a bunch of easy-to-state problems that we can’t solve.)
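
    The formula is easy to play with, too. Here’s a sketch of it, using plain trial division for the primality test, slow but self-contained:

        def is_prime(n):
            """Trial-division primality test; fine for small numbers."""
            if n < 2:
                return False
            d = 2
            while d * d <= n:
                if n % d == 0:
                    return False
                d += 1
            return True

        # Even perfect numbers are exactly 2^(p-1) * (2^p - 1)
        # with 2^p - 1 prime; such a p must itself be prime.
        for p in range(2, 20):
            if is_prime(2**p - 1):
                print(p, 2**(p - 1) * (2**p - 1))
        # p = 2, 3, 5, 7, 13, ... giving 6, 28, 496, 8128, 33550336, ...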

    Some miscellaneous other things we know about an odd perfect number, other than whether any exist: if there are odd perfect numbers, they’re not divisible by 105. They’re equal to one more than a whole multiple of 12, or else 117 more than a whole multiple of 468, or 81 more than a whole multiple of 324. They’ve got to have at least 101 prime factors, and there have to be at least ten distinct prime factors. There have to be at least twelve distinct prime factors if 3 isn’t a factor of the odd perfect number. If this seems like a screwy list of things to know about a thing we don’t even know exists, then welcome to number theory.

    The beard question I believe is a reference to the logician’s paradox. This is the one postulating a village in which the village barber shaves all, but only, the people who do not shave themselves. Given that, who shaves the barber? It’s an old joke, but if you take it seriously you learn something about the limits of what a system of logic can tell you about itself.

    Tiger: 'I've got two plus four hours of homework. I won't be finished until ten minus three o'clock, or maybe even six plus one and a half o'clock.' Punkin: 'What subject?' Tiger: 'Arithmetic, stupid!'

    Bud Blake’s Tiger rerun for the 2nd of June, 2017. Bonus arithmetic problem: what’s the latest time that this could be? Also, don’t you like how the dog’s tail spills over the panel borders twice? I do.

    Bud Blake’s Tiger rerun for the 2nd has Tiger’s arithmetic homework spill out into real life. This happens sometimes.

    Officer Pupp: 'That Mouse is most sure an oaf of awful dumbness, Mrs Kwakk Wakk - y'know that?' Mrs Kwakk Wakk: 'By what means do you find proof of this, Officer Pupp?' 'His sense of speed is insipid - he doesn't seem to know that if I ran 60 miles an hour, and he only 40, that I would eventually catch up to him.' 'No-' 'Yes- I tell you- yes.' 'He seemed to know that a brick going 60 would catch up to a kat going 40.' 'Oh, he did, did he?' 'Why, yes.'

    George Herriman’s Krazy Kat for the 10th of July, 1939 and rerun the 2nd of June, 2017. I realize that by contemporary standards this is a very talky comic strip. But read Officer Pupp’s dialogue, particularly in the second panel. It just flows with a wonderful archness.

    George Herriman’s Krazy Kat for the 10th of July, 1939 was rerun the 2nd of June. I’m not sure that it properly fits here, but the talk about Officer Pupp running at 60 miles per hour and Ignatz Mouse running forty and whether Pupp will catch Mouse sure reads like a word problem. Later strips in the sequence, including the ways that a tossed brick could hit someone who’d be running faster than it, did not change my mind about this. Plus I like Krazy Kat so I’ll take a flimsy excuse to feature it.

     
    • Joshua K. 1:33 am on Saturday, 10 June, 2017 Permalink | Reply

      I thought that the second question in “Saturday Morning Breakfast Cereal” was meant to imply that mathematicians often have beards; therefore, if God would prefer not to have a beard, he probably isn’t a mathematician.


      • Joseph Nebus 11:48 pm on Monday, 12 June, 2017 Permalink | Reply

        Oh, you may have something there. I’m so used to thinking of beards as a logic problem that I didn’t think of them as a mathematician thing. (In my defense, back in grad school I’m not sure any of the faculty had beards.) I’ll take that interpretation too.


  • Joseph Nebus 6:00 pm on Thursday, 27 April, 2017 Permalink | Reply
    Tags: logic, Preteena

    Reading the Comics, April 22, 2017: Thought There’d Be Some More Last Week Edition 


    Allison Barrows’s PreTeena rerun for the 18th is a classic syllogism put into the comic strip’s terms. The thing about these sorts of deductive-logic syllogisms is that whether the argument is valid depends only on the shape of the argument. It has nothing to do with whether the thing being discussed makes any sense. This can be disorienting. It’s hard to ignore the everyday meaning of words when you hear a string of sentences. But it’s also hard to parse a string of sentences if the words don’t make sense in them. This is probably part of why on the mathematics side of things logic courses will skimp on syllogisms, using them to give an antique flavor and sense of style to the introduction of courses. It’s easier to use symbolic representations for logic instead.

    Randy Glasbergen’s Glasbergen Cartoons rerun for the 20th is the old joke about arithmetic being different between school, government, and corporate work. I haven’t looked at the comments — the GoComics redesign, whatever else it does, makes it very easy to skip the comments — but I’m guessing by the second one someone’s said the Common Core method means getting the most wrong answer.

    Dolly, coming home: 'Rithmetic would be a lot easier if it didn't have all those different numbers.'

    Bil Keane and Jeff Keane’s Family Circus for the 21st of April, 2017. In fairness, there aren’t a lot of things we need all of 6, 7, and 8 for and you can just use whatever one of those you’re good at for any calculations with the others. Promise.

    I don’t know whether Bil Keane and Jeff Keane’s Family Circus for the 21st is a rerun. But a lot of them are these days. Anyway, it looks like a silly joke about how nice mathematics would be without numbers; Dolly has no idea. I can sympathize with being intimidated by numerals. At the risk of being all New Math-y, I wonder if she wouldn’t like arithmetic more if it were presented as a game. Like, here’s a couple symbols — let’s say * and | for a start, and then some rules. * and * makes *, but * and | makes |. Also | and * makes |. But | and | makes |*. And so on. This is binary arithmetic, disguised, but I wonder if making it look like something inconsequential would make it more pleasant to learn, and if that would transfer over to arithmetic with 1’s and 0’s. Normal, useful arithmetic would be harder to play like this. You’d need ten symbols that are easy to write that aren’t already numbers, letters, or common symbols. But I wonder if it’d be worth it.
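
    If you’d like to see the disguise at work, here’s a little Python sketch of the game; the symbol names are my own choice, nothing standard:

        # '*' stands in for the binary digit 0, '|' for 1.
        TO_BIT = {'*': '0', '|': '1'}
        TO_SYMBOL = {'0': '*', '1': '|'}

        def play(a, b):
            """Add two symbol-strings by the rules in the paragraph above."""
            total = (int(''.join(TO_BIT[c] for c in a), 2)
                     + int(''.join(TO_BIT[c] for c in b), 2))
            return ''.join(TO_SYMBOL[c] for c in format(total, 'b'))

        print(play('|', '|'))   # |*   (1 + 1 is 10 in binary)
        print(play('|*', '|'))  # ||
        print(play('||', '|'))  # |**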

    Tom Thaves’s Frank and Ernest for the 22nd is provided for mathematics teachers who need something to tape to their door. You’re welcome.

     
  • Joseph Nebus 6:00 pm on Thursday, 9 March, 2017 Permalink | Reply
    Tags: logic

    Words About A Wordless Induction Proof 


    This pair of tweets came across my feed. And who doesn’t like a good visual proof of a mathematical fact? I hope you enjoy.

    So here’s the proposition.
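
    It’s the classic fact that the sum of the first n cubes is the square of the sum of the first n whole numbers:

    1^3 + 2^3 + 3^3 + \cdots + n^3 = \left(1 + 2 + 3 + \cdots + n\right)^2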

    This is the sort of identity we normally try proving by induction. Induction is a great scheme for proving identities like this. It works by finding some index in the formula. Then you show that if the formula is true for one value of the index, it’s true for the next-higher value of the index. Finally, you find some value of the index for which it’s easy to check that the formula’s true. And that proves it’s true for all the values of that index above that base.

    In this case the index is ‘n’. It’s really easy to prove the base case, since 1^3 is equal to 1^2 what with ‘1’ being the number everybody likes to raise to powers. Going from proving that if it’s true in one case — 1^3 + 2^3 + 3^3 + \cdots + n^3 — then it’s true for the next — 1^3 + 2^3 + 3^3 + \cdots + n^3 + (n + 1)^3 — is work. But you can get it done.
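
    For the record, the work goes through the familiar fact that 1 + 2 + \cdots + n = \frac{n(n+1)}{2} . Suppose the identity holds for n. Then

    1^3 + 2^3 + \cdots + n^3 + (n + 1)^3 = \left(\frac{n(n+1)}{2}\right)^2 + (n + 1)^3 = \frac{(n+1)^2}{4}\left(n^2 + 4n + 4\right) = \left(\frac{(n+1)(n+2)}{2}\right)^2

    and that last expression is \left(1 + 2 + \cdots + (n+1)\right)^2 , which is the identity for n + 1.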

    And then there’s this, done visually:

    It took me a bit of reading before I was confident in what it was showing. But it is all there.

    As often happens with these wordless proofs you can ask whether it is properly speaking a proof. A proof is an argument, and to be complete it has to contain every step needed to deduce the conclusion from the premises, following one of the rules of inference at each step. Thing is, basically no proof is complete that way, because it takes forever. We elide stuff that seems obvious, confident that if we had to we could fill in the intermediate steps. A wordless proof like this trusts that if we try to describe what is in the picture then we are constructing the argument.

    That’s surely enough of my words.

     
  • Joseph Nebus 6:00 pm on Tuesday, 21 February, 2017 Permalink | Reply
    Tags: automated proofs, logic

    One Way To Get Your Own Theorem 


    While doing some research to better grouse about Ken Keeler’s Futurama theorem I ran across an amusing site I hadn’t known about. It is Theory Mine, a site that allows you to hire — and name — a genuine, mathematically sound theorem. The spirit of the thing is akin to that scam in which you “name” a star. But this is more legitimate in that, you know, it’s got any legitimacy. For this, you’re buying naming rights from someone who has any rights to sell. By convention the discoverer of a theorem can name it whatever she wishes, and there’s one chance in ten that anyone else will use the name.

    I haven’t used it. I’ve made my own theorems, thanks, and could put them on a coffee mug or t-shirt if I wished to make a particularly boring t-shirt. But I’m delighted by the scheme. They don’t have a team of freelance mathematicians whipping up stuff and hoping it isn’t already known. Not for the kinds of prices they charge. This should inspire the question: well, where do the theorems come from?

    The scheme uses an automated reasoning system. I don’t know the details of how it works, but I can think of a system by which this might work. It goes back to the Crisis of Foundations, the time in the late 19th/early 20th century when logicians got very worried that we were still letting physical intuitions and unstated assumptions stay in our mathematics. One solution: turn everything into symbols, icons with no connotations. The axioms of mathematics become a couple basic symbols. The laws of logical deduction become things we can do with the symbols, converting one line of symbols into a related other line. Every line we get is a theorem. And we know it’s correct. To write out the theorem in this scheme is to write out its proof, and to feel like you’re touching some deep magic. And there’s no human frailties in the system, besides the thrill of reeling off True Names like that.

    You may not be sure what this works like. It may help to compare it to a slightly-fun number coding scheme. I mean the one where you start with a number, like, ‘1’. Then you write down how many times and which digit appears. There’s a single ‘1’ in that string, so you would write down ’11’. And repeat: In ’11’ there’s a sequence of two ‘1’s, so you would write down ’21’. And repeat: there’s a single ‘2’ and a single ‘1’, so you then write down ‘1211’. And again: there’s a single ‘1’, a single ‘2’, and then a double ‘1’, so you next write ‘111221’. And so on until you get bored or die.
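
    That’s the look-and-say sequence, by the way, and it’s quick to generate if you’d like to play along; a little Python sketch:

        from itertools import groupby

        def look_and_say(s):
            """Read the string aloud: each run of a digit becomes
            (length of run, digit)."""
            return ''.join(str(len(list(run))) + digit
                           for digit, run in groupby(s))

        term = '1'
        for _ in range(6):
            print(term)
            term = look_and_say(term)
        # 1, 11, 21, 1211, 111221, 312211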

    When we do this for mathematics we start with a couple different basic units. And we also start with several things we may do with those symbols. So there’s rarely a single line that follows from the previous. There’s an ever-expanding tree of known truths. This may stave off boredom but I make no promises about death.

    The result of this is pages and pages that look like Ancient High Martian. I don’t feel the thrill of doing this. Some people do, though. And as recreational mathematics goes I suppose it’s at least as good as sudoku. Anyway, this kind of project, rewarding indefatigability and thoroughness, is perfect for automation anyway. Let the computer work out all the things we can prove are true.

    If I’m reading Theory Mine’s description correctly they seem to be doing something roughly like this. If they’re not, well, you go ahead and make your own rival service using my paragraphs as your system. All I ask is one penny for every use of L’Hôpital’s Rule, a theorem named for Guillaume de l’Hôpital and discovered by Johann Bernoulli. (I have heard that Bernoulli was paid for his work, but I do not know that’s true. I have now explained why, if we suppose that to be true, my prior sentence is a very funny joke and you should at minimum chuckle.)

    This should inspire the question: what do we need mathematicians for, then? It’s for the same reason we need writers, when it would be possible to automate the composing of sentences that satisfy the rules of English grammar. I mean if there were rules to English grammar. That we can identify a theorem that’s true does not mean it has even the slightest interest to anyone, ever. There’s much more that could be known than that we could ever care about.

    You can see this in Theory Mine’s example of Quentin’s Theorem. Quentin’s Theorem is about an operation you can do on a set whose elements consist of the non-negative whole numbers with a separate value, which they call color, attached. You can add these colored-numbers together according to some particular rules about how the values and the colors add. The order of this addition normally matters: blue two plus green three isn’t the same as green three plus blue two. Quentin’s Theorem finds cases where, if you add enough colored-numbers together, the order doesn’t matter. I know. I am also staggered by how useful this fact promises to be.

    Yeah, maybe there is some use. I don’t know what it is. If anyone’s going to find the use it’ll be a mathematician. Or a physicist who’s found some bizarre quark properties she wants to codify. Anyway, if what you’re interested in is “what can you do to make a vertical column stable?” then the automatic proof generator isn’t helping you at all. Not without a lot of work put in to guiding it. So we can skip the hard work of finding and proving theorems, if we can do the hard work of figuring out where to look for these theorems instead. Always the way.

    You also may wonder how we know the computer is doing its work right. It’s possible to write software that is logically proven to be correct. That is, the software can’t produce anything but the designed behavior. We don’t usually write software this way. It’s harder to write, because you have to actually design your software’s behavior. And we can get away without doing it. Usually there’s some human overseeing the results who can say what to do if the software seems to be going wrong. Advocates of logically-proven software point out that we’re getting more software, often passing results on to other programs. This can turn a bug in one program into a bug in the whole world faster than a responsible human can say, “I dunno. Did you try turning it off and on again?” I’d like to think we could get more logically-proven software. But I also fear I couldn’t write software that sound and, you know, mathematics blogging isn’t earning me enough to eat on.

    Also, yes, even proven software will malfunction if the hardware the computer’s on malfunctions. That’s rare, but does happen. Fortunately, it’s possible to automate the checking of a proof, and that’s easier to do than creating a proof in the first place. We just have to prove we have the proof-checker working. Certainty would be a nice thing if we ever got it, I suppose.

     
    • mathtuition88 5:01 am on Wednesday, 22 February, 2017 Permalink | Reply

      Computers are getting more amazing!


      • Joseph Nebus 4:19 pm on Saturday, 25 February, 2017 Permalink | Reply

        They are astounding, which makes it only the more baffling that we can’t get iTunes to reliably download new episodes of a podcast we’re subscribed to and listen to every week.


    • Henry Game 9:40 am on Wednesday, 22 February, 2017 Permalink | Reply

      One day I’d like you to explain the magic of numbers, vortex maths etc to me. I am interested in numerology, ancient geography, Metatron’s cube and all that, but, for some reason, I have never studied maths.
      Maybe you could inspire me and advise me where to start?
      I thoroughly enjoy your posts, when I come across them, but half the time I am blown away. 😂


      • Joseph Nebus 4:27 pm on Saturday, 25 February, 2017 Permalink | Reply

        Well, hm. I’m not sure about literal magic of numbers, as in numerology and the like. For what’s wonderful about mathematics … I’m still not perfectly sure. I think I’d give a try of Courant and Robbins’s What Is Mathematics?, originally published in 1940 but still in print and updated, and your library (or university library) will have copies. It’s a little survey of a lot of the fields of mathematics. And it’s mostly episodic, so if one section isn’t doing anything for you it’s fine to skip to the next, or just to pick a section arbitrarily and see what’s going on there.

        And I’m glad you enjoy stuff around here, but if you do get stuck on something please say so! It’s very hard for me to guess what people don’t know, and there’s usually a good post to be made in explaining why something confused someone.


      • mathtuition88 4:34 pm on Saturday, 25 February, 2017 Permalink | Reply

        I just checked out metatron’s cube, looks really cool.


  • Joseph Nebus 6:00 pm on Sunday, 8 January, 2017 Permalink | Reply
    Tags: Birdbrains, Elderberries, Grand Avenue, logic

    Reading the Comics, January 7, 2017: Just Before GoComics Breaks Everything Edition 


    Most of the comics I review here are printed on GoComics.com. Well, most of the comics I read online are from there. But even so I think they have more comic strips that mention mathematical themes. Anyway, they’re unleashing a complete web site redesign on Monday. I don’t know just what the final version will look like. I know that the beta versions included the incredibly useful, that is to say dumb, feature where if a particular comic you do read doesn’t have an update for the day — and many of them don’t, as they’re weekly or three-times-a-week or so — then it’ll show some other comic in its place. I mean, the idea of encouraging people to find new comics is a good one. To some extent that’s what I do here. But the beta made no distinction between “comic you don’t read because you never heard of Microcosm” and “comic you don’t read because glancing at it makes your eyes bleed”. And on an idiosyncratic note, I read a lot of comics. I don’t need to see Dude and Dude reruns in fourteen spots on my daily comics page, even if I didn’t mind it to start with.

    Anyway. I am hoping, desperately hoping, that with the new site all my old links to comics are going to keep working. If they don’t then I suppose I’m just ruined. We’ll see. My suggestion is if you’re at all curious about the comics you read them today (Sunday) just to be safe.

    Ashleigh Brilliant’s Pot-Shots is a curious little strip I never knew of until GoComics picked it up a few years ago. Its format is compellingly simple: a little illustration alongside a wry, often despairing, caption. I love it, but I also understand why it was the subject of endless queries to the Detroit Free Press (Or Whatever) about why this thing was taking up newspaper space. The strip rerun the 31st of December is a typical example, and it amuses me at least. And it uses arithmetic as the way to communicate reasoning, both good and bad. Brilliant’s joke does address something that logicians have to face, too. Whether an argument is logically valid depends entirely on its structure. If the form is correct the reasoning may be excellent. But to be sound an argument has to be valid and must also have its assumptions be true. We can separate whether an argument is right from whether it could ever possibly be right. If you don’t see the value in that, you have never participated in an online debate about where James T Kirk was born and whether Spock was the first Vulcan in Star Fleet.

    Thom Bluemel’s Birdbrains for the 2nd of January, 2017, is a loaded-dice joke. Is this truly mathematics? Statistics, at least? Close enough for the start of the year, I suppose. Working out whether a die is loaded is one of the things any gambler would like to know, and that mathematicians might be called upon to identify or exploit. (I had a grandmother unshakably convinced that I would have some natural ability to beat the Atlantic City casinos if she could only sneak the underaged me in. I doubt I could do anything of value there besides see the stage magic show.)

    Jack Pullan’s Boomerangs rerun for the 2nd is built on the one bit of statistical mechanics that everybody knows, that something or other about entropy always increasing. It’s not a quantum mechanics rule, but it’s a natural confusion. Quantum mechanics has the reputation as the source of all the most solid, irrefutable laws of the universe’s working. Statistical mechanics and thermodynamics have this musty odor of 19th-century steam engines, no matter how much there is to learn from there. Anyway, the collapse of systems into disorder is not an irrevocable thing. It takes only energy or luck to overcome disorderliness. And in many cases we can substitute time for luck.

    Scott Hilburn’s The Argyle Sweater for the 3rd is the anthropomorphic-geometry-figure joke that I’ve been waiting for. I had thought Hilburn did this all the time, although a quick review of Reading the Comics posts suggests he’s been more about anthropomorphic numerals the past year. This is why I log even the boring strips: you never know when I’ll need to check the last time Scott Hilburn used “acute” to mean “cute” in reference to triangles.

    Mike Thompson’s Grand Avenue uses some arithmetic as the visual cue for “any old kind of schoolwork, really”. Steve Breen’s name seems to have gone entirely from the comic strip. On Usenet group rec.arts.comics.strips Brian Henke found that Breen’s name hasn’t actually been on the comic strip since May, and D D Degg found a July 2014 interview indicating Thompson had mostly taken the strip over from originator Breen.

    Mark Anderson’s Andertoons for the 5th is another name-drop that doesn’t have any real mathematics content. But come on, we’re talking Andertoons here. If I skipped it the world might end or something untoward like that.

    'Now for my math homework. I've got a comfortable chair, a good light, plenty of paper, a sharp pencil, a new eraser, and a terrific urge to go out and play some ball.'

    Ted Shearer’s Quincy for the 14th of November, 1977, and reprinted the 7th of January, 2017. I kind of remember having a lamp like that. I don’t remember ever sitting down to do my mathematics homework with a paintbrush.

    Ted Shearer’s Quincy for the 14th of November, 1977, doesn’t have any mathematical content really. Just a mention. But I need some kind of visual appeal for this essay and Shearer is usually good for that.

    Corey Pandolph, Phil Frank, and Joe Troise’s The Elderberries rerun for the 7th is also a very marginal mention. But, what the heck, it’s got some of your standard wordplay about angles and it’ll get this week’s essay that much closer to 800 words.

     
  • Joseph Nebus 6:00 pm on Saturday, 31 December, 2016 Permalink | Reply
    Tags: 19th Century, Axiom of Choice, continuum hypothesis, logic, ZFC

    The End 2016 Mathematics A To Z: Zermelo-Fraenkel Axioms 


    gaurish gave me a choice for the Z-term to finish off the End 2016 A To Z. I appreciate it. I’m picking the more abstract thing because I’m not sure that I can explain zero briefly. The foundations of mathematics are a lot easier.

    Zermelo-Fraenkel Axioms

    I remember the look on my father’s face when I asked if he’d tell me what he knew about sets. He misheard what I was asking about. When we had that straightened out my father admitted that he didn’t know anything particular. I thanked him and went off disappointed. In hindsight, I kind of understand why everyone treated me like that in middle school.

    My father’s always quick to dismiss how much mathematics he knows, or could understand. It’s a common habit. But in this case he was probably right. I knew a bit about set theory as a kid because I came to mathematics late in the “New Math” wave. Sets were seen as fundamental to why mathematics worked without being so exotic that kids couldn’t understand them. Perhaps so; both my love and I delighted in what we got of set theory as kids. But if you grew up before that stuff was popular you probably had a vague, intuitive, and imprecise idea of what sets were. Mathematicians had only a vague, intuitive, and imprecise idea of what sets were through to the late 19th century.

    And then came what mathematics majors hear of as the Crisis of Foundations. (Or a similar name, like Foundational Crisis. I suspect there are dialect differences here.) It reflected mathematics taking seriously one of its ideals: that everything in it could be deduced from clearly stated axioms and definitions using logically rigorous arguments. As often happens, taking one’s ideals seriously produces great turmoil and strife.

    Before about 1900 we could get away with saying that a set was a bunch of things which all satisfied some description. That’s how I would describe it to a new acquaintance if I didn’t want to be treated like I was in middle school. The definition is fine if we don’t look at it too hard. “The set of all roots of this polynomial”. “The set of all rectangles with area 2”. “The set of all animals with four-fingered front paws”. “The set of all houses in Central New Jersey that are yellow”. That’s all fine.

    And then if we try to be logically rigorous we get problems. We always did, though. They’re embodied by ancient jokes like the person from Crete who declared that all Cretans always lie; is the statement true? Or the slightly less ancient joke about the barber who shaves only the men who do not shave themselves; does he shave himself? If not jokes these should at least be puzzles faced in fairy-tale quests. Logicians dressed this up some. Bertrand Russell gave us the quite respectable “The set consisting of all sets which are not members of themselves”, and asked us to stare hard into that set. To this we have only one logical response, which is to shout, “Look at that big, distracting thing!” and run away. This satisfies the problem only for a while.

    The while ended in — well, that took a while too. But between 1908 and the early 1920s Ernst Zermelo, Abraham Fraenkel, and Thoralf Skolem paused from arguing over whose name would make the best indie rock band name long enough to put set theory right. Their structure is known as Zermelo-Fraenkel Set Theory, or ZF. It gives us a reliable base for set theory that avoids any contradictions or catastrophic pitfalls. Or does so far as we have found in a century of work.

    It’s built on a set of axioms, of course. Most of them are uncontroversial, things like declaring two sets are equivalent if they have the same elements. Declaring that the union of sets is itself a set. Obvious, sure, but it’s the obvious things that we have to make axioms. Maybe you could start an argument about whether we should just assume there exists some infinitely large set. But if we’re aware that sets probably have something to teach us about numbers, and that numbers can get infinitely large, then it seems fair to suppose that there must be some infinitely large set. The axioms that aren’t simple obvious things like that are too useful to do without. They assume stuff like that no set is an element of itself. Or that every set has a “power set”, a new set comprised of all the subsets of the original set. Good stuff to know.

    There is one axiom that’s controversial. Not controversial the way Euclid’s Parallel Postulate was. That’s the ugly one about lines crossing another line meeting on the same side they make angles smaller than something something or other. That axiom was controversial because it read so weird, so needlessly complicated. (It isn’t; it’s exactly as complicated as it must be. Or better, it’s as simple as it could possibly be and still be useful.) The controversial axiom of Zermelo-Fraenkel Set Theory is known as the Axiom of Choice. It says if we have a collection of mutually disjoint sets, each with at least one thing in them, then it’s possible to pick exactly one item from each of the sets.
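
    In symbols, one standard way to put it is the equivalent choice-function form: for any collection X of nonempty sets there’s a function f picking one element out of each. That is,

    \forall X \left[ \varnothing \notin X \implies \exists f: X \to \bigcup X \;\; \forall A \in X \; \left( f(A) \in A \right) \right]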

    It’s impossible to dispute this is what we have axioms for. It’s about something that feels like it should be obvious: we can always pick something from a set. How could this not be true?

    If it is true, though, we get some unsavory conclusions. For example, it becomes possible to take a ball the size of an orange and slice it up. We slice using mathematical blades. They’re not halted by something as petty as the desire not to slice atoms down the middle. We can reassemble the pieces. Into two balls. And worse, it doesn’t require we do something like cut the orange into infinitely many pieces. We expect crazy things to happen when we let infinities get involved. No, though, we can do this cut-and-duplicate thing by cutting the orange into five pieces. When you hear that it’s hard to know whether to point to the big, distracting thing and run away. If we dump the Axiom of Choice we don’t have that problem. But can we do anything useful without the ability to make a choice like that?

    And we’ve learned that we can. If we want to use the Zermelo-Fraenkel Set Theory with the Axiom of Choice we say we were working in “ZFC”, Zermelo-Fraenkel-with-Choice. We don’t have to. If we don’t want to make any assumption about choices we say we’re working in “ZF”. Which to use depends on what one wants to use.

    Either way Zermelo and Fraenkel and Skolem established set theory on the foundation we use to this day. We’re not required to use them, no; there’s a construction called von Neumann-Bernays-Gödel Set Theory that’s supposed to be more elegant. They didn’t mention it in my logic classes that I remember, though.

    And still there’s important stuff we would like to know which even ZFC can’t answer. The most famous of these is the continuum hypothesis. Everyone knows — excuse me. That’s wrong. Everyone who would be reading a pop mathematics blog knows there are different-sized infinitely-large sets. And knows that the set of integers is smaller than the set of real numbers. The question is: is there a set bigger than the integers yet smaller than the real numbers? The Continuum Hypothesis says there is not.

    Zermelo-Fraenkel Set Theory, even though it’s all about the properties of sets, can’t tell us if the Continuum Hypothesis is true. But that’s all right; it can’t tell us if it’s false, either. Whether the Continuum Hypothesis is true or false stands independent of the rest of the theory. We can assume whichever state is more useful for our work.

    Back to the ideals of mathematics. One question that produced the Crisis of Foundations was consistency. How do we know our axioms don’t contain a contradiction? It’s hard to say. Typically a set of axioms we can prove consistent are also a set too boring to do anything useful in. Zermelo-Fraenkel Set Theory, with or without the Axiom of Choice, has a lot of interesting results. Do we know the axioms are consistent?

    No, not yet. We know some of the axioms are mutually consistent, at least. And we have some results which, if true, would prove the axioms to be consistent. We don’t know if they’re true. Mathematicians are generally confident that these axioms are consistent. Mostly on the grounds that if there were a problem something would have turned up by now. It’s withstood all the obvious faults. But the universe is vaster than we imagine. We could be wrong.

    It’s hard to live up to our ideals. After a generation of valiant struggling we settle into hoping we’re doing good enough. And waiting for some brilliant mind that can get us a bit closer to what we ought to be.

     
    • elkement (Elke Stangl) 10:42 am on Sunday, 1 January, 2017 Permalink | Reply

      Very interesting – as usual! I was also subjected to the New Math in elementary school – the upside was that you got a lot of nice toys for free, as ‘add-ons’ to school books ( … plastic cubes and other toy blocks that should represent members of sets …). Not sure if it prepared one better to understand Russell’s paradox later ;-)


      • elkement (Elke Stangl) 10:43 am on Sunday, 1 January, 2017 Permalink | Reply

        … and I wish you a Happy New Year and more A-Zs in 2017 :-)


        • Joseph Nebus 5:34 am on Thursday, 5 January, 2017 Permalink | Reply

          Thanks kindly. I am going to do a fresh A-to-Z, although I don’t know just when. Not in January; haven’t got the energy for it right away.


      • Joseph Nebus 5:34 am on Thursday, 5 January, 2017 Permalink | Reply

        Oh, now, the toys were fantastic. I suppose it’s a fair question whether the people who got something out of the New Math got it because they understood fundamentals better in that form or whether it was just that the toys and games made the subject more engaging.

        I am, I admit, a fan of the New Math, but that may just be because it’s the way I learned mathematics, and the way you did something as a kid is always the one natural way to do it.


  • Joseph Nebus 6:00 pm on Monday, 7 November, 2016 Permalink | Reply
    Tags: logic

    The End 2016 Mathematics A To Z: Cantor’s Middle Third 


    Today’s term is a request, the first of this series. It comes from HowardAt58, head of the Saving School Math blog. There are many letters not yet claimed; if you have a term you’d like to see me write about please head over to the “Any Requests?” page and pick a letter. Please not one I figure to get to in the next day or two.

    Cantor’s Middle Third.

    I think one could make a defensible history of mathematics by describing it as a series of ridiculous things that get discovered. And then, by thinking about these ridiculous things long enough, mathematicians come to accept them. Even rely on them. Sometime later the public even comes to accept them. I don’t mean to say getting people to accept ridiculous things is the point of mathematics. But there is a pattern which happens.

    Consider. People doing mathematics came to see how a number could be detached from a count or a measure of things. That we can do work on, say, “three” whether it’s three people, three kilograms, or three square meters. We’re so used to this it’s only when we try teaching mathematics to the young we realize it isn’t obvious.

    Or consider that we can have, rather than a whole number of things, a fraction. Some part of a thing, as if you could have one-half pieces of chalk or two-thirds a fruit. Counting is relatively obvious; fractions are something novel but important.

    We have “zero”; somehow, the lack of something is still a number, the way two or five or one-half might be. For that matter, “one” is a number. How can something that isn’t numerous be a number? We’re used to it anyway. We can have not just fractions and one and zero but irrational numbers, ones that can’t be represented as a fraction. We have negative numbers, somehow a lack of whatever we were counting so great that we might add some of what we were counting to the pile and still have nothing.

    That takes us up to about eight hundred years ago or something like that. The public’s gotten to accept all this as recently as maybe three hundred years ago. They’ve still got doubts. I don’t blame folks. Complex numbers mathematicians like; the public’s still getting used to the idea, but at least they’ve heard of them.

    Cantor’s Middle Third is part of the current edge. It’s something mathematicians are aware of and that defies sense at least. But we’ve come to accept it. The public, well, they don’t know about it. Maybe some do; it turns up in pop mathematics books that like sharing the strangeness of infinities. Few people read them. Sometimes it feels like all those who do go online to tell mathematicians they’re crazy. It comes to us, as you might guess from the name, from Georg Cantor. Cantor established the modern mathematical concept of how to study infinitely large sets in the late 19th century. And he was repeatedly hospitalized for depression. It’s cruel to write all that off as “and he was crazy”. His work’s withstood a hundred and thirty-five years of extremely smart people looking at it skeptically.

    The Middle Third starts out easily enough. Take a line segment. Then chop it into three equal pieces and throw away the middle third. You see where the name comes from. What do you have left? Some of the original line. Two-thirds of the original line length. A big gap in the middle.

    Now take the two line segments. Chop each of them into three equal pieces. Throw away the middle thirds of the two pieces. Now we’re left with four chunks of line and four-ninths of the original length. One big and two little gaps in the middle.

    Now take the four little line segments. Chop each of them into three equal pieces. Throw away the middle thirds of the four pieces. We’re left with eight chunks of line, about eight-twenty-sevenths of the original length. Lots of little gaps. Keep doing this, chopping up line segments and throwing away middle pieces. Never stop. Well, pretend you never stop and imagine what’s left.
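
    The construction is easy to carry out numerically, if you’d like to watch the length drain away; a little Python sketch:

        def remove_middle_thirds(segments):
            """Replace each interval (a, b) by its first and last thirds."""
            out = []
            for a, b in segments:
                third = (b - a) / 3
                out.extend([(a, a + third), (b - third, b)])
            return out

        segments = [(0.0, 1.0)]
        for step in range(1, 6):
            segments = remove_middle_thirds(segments)
            length = sum(b - a for a, b in segments)
            print(step, len(segments), round(length, 4))
        # 1 2 0.6667 -- two-thirds of the original length
        # 2 4 0.4444 -- four-ninths
        # 3 8 0.2963 -- eight-twenty-sevenths; (2/3)^n dwindles to zero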

    What’s left is deeply weird. What’s left has no length, no measure. That’s easy enough to prove. But we haven’t thrown everything away. There are bits of the original line segment left over. The left endpoint of the original line is left behind. So is the right endpoint of the original line. The endpoints of the line segments after the first time we chopped out a third? Those are left behind. The endpoints of the line segments after chopping out a third the second time, the third time? Those have to be in the set. We have a dust, isolated little spots of the original line, none of them combining together to cover any length. And there are infinitely many of these isolated dots.

    We’ve seen that before. At least we have if we’ve read anything about the Cantor Diagonal Argument. You can find that among the first ten posts of every mathematics blog. (Not this one. I was saving the subject until I had something good to say about it. Then I realized many bloggers have covered it better than I could.) Part of it is pondering how there can be a set of infinitely many things that don’t cover any length. The whole numbers are such a set and it seems reasonable they don’t cover any length. The rational numbers, though, are also an infinitely-large set that doesn’t cover any length. And there’s exactly as many rational numbers as there are whole numbers. This is unsettling but if you’re the sort of person who reads about infinities you come to accept it. Or you get into arguments with mathematicians online and never know you’ve lost.

    Here’s where things get weird. How many bits of dust are there in this middle third set? It seems like it should be countable, the same size as the whole numbers. After all, we pick up some of these points every time we throw away a middle third. So we double the number of points left behind every time we throw away a middle third. That’s countable, right?

    It’s not. We can prove it. The proof looks uncannily like that of the Cantor Diagonal Argument. That’s the one that proves there are more real numbers than there are whole numbers. There are points in this leftover set that were not endpoints of any of these middle-third excerpts. This dust has more points in it than there are rational numbers, but it covers no length.

    (The dust does, in fact, have the same size as the real numbers. The points that survive are exactly the ones you can write in base three using only the digits 0 and 2, and swapping every 2 for a 1 matches them up with the binary expansions of all the numbers in the unit interval.)

    It’s got other neat properties. It’s a fractal, which is why someone might have heard of it, back in the Great Fractal Land Rush of the 80s and 90s. Look closely at part of this set and it looks like the original set, with bits of dust edging gaps of bigger and smaller sizes. It’s got a fractal dimension, or “Hausdorff dimension” in the lingo, that’s the logarithm of two divided by the logarithm of three. That’s a number actually known to be transcendental, which is reassuring. Nearly all numbers are transcendental, but we only know a few examples of them.

    HowardAt58 asked me about the Middle Third set, and that’s how I’ve referred to it here. It’s more often called the “Cantor set” or “Cantor comb”. The “comb” makes sense because if you draw successive middle-thirds-thrown-away, one after the other, you get something that looks kind of like a hair comb, if you squint.

    You can build sets like this that aren’t based around thirds. You can, for example, develop one by cutting lines into five chunks and throwing away the second and fourth. You get results that are similar, and similarly heady, but different. They’re all astounding. They’re all hard to believe in yet. They may get to be stuff we just accept as part of how mathematics works.

     
  • Joseph Nebus 6:00 pm on Tuesday, 11 October, 2016 Permalink | Reply
    Tags: logic

    Reading the Comics, October 8, 2016: Split Week Edition Part 2 


    And now I can finish off last week’s comics. It was a busy week. The first few days of this week have been pretty busy too. Meanwhile, Dave Kingsbury has recently read a biography of Lewis Carroll, and been inspired to form a haiku/tanka project. You might enjoy.

    Susan Camilleri Konar is a new cartoonist for the Six Chix collective. Her first strip to get mentioned around these parts is from the 5th. It’s a casual mention of the Fibonacci sequence, which is one of the few sequences that a normal audience would recognize as something going on forever. And yes, I noticed the spiral in the background. That’s one of the common visual representations of the Fibonacci sequence: it starts from the center. The rectangles inside have dimensions 1 by 2, then 2 by 3, then 3 by 5, then 5 by 8, and so on; the spiral connects vertices of these rectangles. It’s an attractive spiral and you can derive the overrated Golden Ratio from the dimensions of larger rectangles. This doesn’t make the Golden Ratio important or anything, but it is there.

    'It seems like Fibonacci's been entering his password for days now.'

    Susan Camilleri Konar’s Six Chix for the 5th of October, 2016. And yet what distracts me is both how much food Fibonacci has on his desk and how much of it is hidden behind his computer where he can’t get at it. He’s going to end up spilling his coffee on something important fiddling around like that. And that’s not even getting at his computer being at this weird angle relative to the walls.
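
    If you want to watch those rectangle ratios settle down, a couple lines of Python will do it; my own sketch:

        # Ratios of consecutive Fibonacci numbers close in on the
        # Golden Ratio, (1 + sqrt(5)) / 2, about 1.618034.
        a, b = 1, 1
        for _ in range(12):
            a, b = b, a + b
            print(b, '/', a, '=', round(b / a, 6))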

    Ryan North’s Dinosaur Comics for the 6th is part of a story about T-Rex looking for certain truth. Mathematics could hardly avoid coming up. And it does offer what look like universal truths: given the way deductive logic works, and some starting axioms, various things must follow. “1 + 1 = 2” is among them. But there are limits to how much that tells us. If we accept the rules of Monopoly, then owning four railroads means the rent for landing on one is a game-useful $200. But if nobody around you cares about Monopoly, so what? And so it is with mathematics. Utahraptor and Dromiceiomimus point out that the mathematics we know is built on premises we have selected because we find them interesting or useful. We can’t know that the mathematics we’ve deduced has any particular relevance to reality. Indeed, it’s worse than North points out: How do we know whether an argument is valid? Because we believe that its conclusions follow from its premises according to our rules of deduction. We rely on our possibly deceptive senses to tell us what the argument even was. We rely on a mind possibly upset by an undigested bit of beef, a crumb of cheese, or a fragment of an underdone potato to tell us the rules are satisfied. Mathematics seems to offer us absolute truths, but it’s hard to see how we can get there.

    Rick Stromoski’s Soup to Nutz for the 6th has a mathematics cameo in a student-resisting-class-questions problem. But the teacher’s question is related to the figure that made my first fame around these parts.

    Mark Anderson’s Andertoons for the 7th is the long-awaited Andertoon for last week. It is hard getting education in through all the overhead.

    Bill Watterson’s Calvin and Hobbes rerun for the 7th is a basic joke about Calvin’s lousy student work. Fun enough. Calvin does show off one of those important skills mathematicians learn, though. He does do a sanity check. He may not know what 12 + 7 and 3 + 4 are, but he does notice that 12 + 7 has to be something larger than 3 + 4. That’s a starting point. It’s often helpful before starting work on a problem to have some idea of what you think the answer should be.

     
    • davekingsbury 5:57 pm on Wednesday, 12 October, 2016 Permalink | Reply

      Thank you for the mention. Good advice about starting work on a problem knowing roughly what the answer is … though my post demonstrated the opposite!


      • Joseph Nebus 3:43 am on Saturday, 15 October, 2016 Permalink | Reply

        Quite welcome. And, well, usually having an idea what answer you expect helps. Sometimes it misfires, I admit. But all rules of thumb sometimes misfire. If your expectation misfires it’s probably because you expect the answer to be something that’s not just wrong, but wrong in a significant way. That is, not wrong because you’re thinking 12 when it should be 14, but rather wrong because you’re thinking 12 when you should be thinking of doughnut shapes. But figuring that out is another big learning experience.


  • Joseph Nebus 6:00 pm on Thursday, 7 July, 2016 Permalink | Reply
    Tags: logic

    Theorem Thursday: The Jordan Curve Theorem 


    There are many theorems that you have to get fairly far into mathematics to even hear of. Often they involve things that are so abstract and abstruse that it’s hard to parse just what we’re studying. This week’s entry is not one of them.

    The Jordan Curve Theorem.

    There are a couple of ways to write this. I’m going to fall back on the version that Richard Courant and Herbert Robbins put in the great book What Is Mathematics?. It’s a theorem in the field of topology, the study of how shapes interact. In particular it’s about simple, closed curves on a plane. A curve is just what you figure it should be. It’s closed if it … uh … closes, makes a complete loop. It’s simple if it doesn’t cross itself or have any disconnected bits. So, something you could draw without lifting pencil from paper and without crossing back over yourself. Have all that? Good. Here’s the theorem:

    A simple closed curve in the plane divides that plane into exactly two domains, an inside and an outside.

    It’s named for Camille Jordan, a French mathematician who lived from 1838 to 1922, and who’s renowned for work in group theory and topology. It’s a different Jordan from the one named in Gauss-Jordan Elimination, which is a matrix thing that’s important but tedious. It’s also a different Jordan from Jordan Algebras, which I remember hearing about somewhere.

    The Jordan Curve Theorem is proved by reading its proposition and then saying, “Duh”. This is compelling, although it lacks rigor. It’s obvious if your curve is a circle, or a slightly squished circle, or a rectangle or something like that. It’s less obvious if your curve is a complicated labyrinth-type shape.

    A labyrinth drawn in straight and slightly looped lines.

    A generic complicated maze shape. Can you pick out which part is the inside and which the outside? Pretend you don’t notice that little peninsula thing in the upper right corner. I didn’t mean the line to overlap itself but I was using too thick a brush in ArtRage and didn’t notice before I’d exported the image.

    It gets downright hard if the curve has a lot of corners. This is why a completely satisfying rigorous proof took decades to find. There are curves that are nowhere differentiable, that are nothing but corners, and those are hard to deal with. If you think there’s no such thing, then remember the Koch Snowflake. That’s that triangle sticking up from the middle of a straight line, that itself has triangles sticking up in the middle of its straight lines, and littler triangles still sticking up from the straight lines. Carry that on forever and you have a shape that’s continuous but always changing direction, and this is hard to deal with.

    Still, you can have a good bit of fun drawing a complicated figure, then picking a point and trying to work out whether it’s inside or outside the curve. The challenging way to do that is to view your figure as a maze and look for a path leading outside. The easy way is to draw a new line. I recommend doing that in a different color.

    In particular, draw a line from your target point to the outside. Some definitely outside point. You need the line to not be parallel to any of the curve’s line segments. And it’s easier if you don’t happen to intersect any vertices, but if you must, we’ll deal with that two paragraphs down.

    A dot with a testing line that crosses the labyrinth curve six times, and therefore is outside the curve.

    A red dot that turns out to be outside the labyrinth, based on the number of times the testing line, in blue, crosses the curve. I learned doing this that I should have drawn the dot and blue line first and then fit a curve around it so I wouldn’t have to work so hard to find one lousy point and line segment that didn’t have some problems.

    So draw your testing line here from the point to something definitely outside. And count how many times your testing line crosses the original curve. If the testing line crosses the original curve an even number of times then the original point was outside the curve. If the testing line crosses the original curve an odd number of times then the original point was inside the curve. Done.

    If your testing line touches a vertex, well, then it gets fussy. It depends whether the two edges of the curve that go into that vertex stay on the same side as your testing line. If the original curve’s edges stay on the same side of your testing line, then don’t count that as a crossing. If the edges go on opposite sides of the testing line, then that does count as one crossing. With that in mind, carry on like you did before. An even number of crossings means your point was outside. An odd number of crossings means your point was inside.
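
    If you’d like a computer to do the counting, the crossing rule translates into a few lines of code. Here’s a minimal sketch in Python, assuming the curve is a polygon given as a list of corner points; the function and variable names are my own inventions. Testing against a horizontal line, with a half-open comparison on the edge endpoints, quietly settles the fussy touches-a-corner cases just described.

    ```python
    def is_inside(point, polygon):
        """Even-odd test: does a horizontal testing line from the point
        cross the polygon's edges an odd number of times?"""
        px, py = point
        crossings = 0
        for i in range(len(polygon)):
            x1, y1 = polygon[i]
            x2, y2 = polygon[(i + 1) % len(polygon)]
            # The half-open test on heights counts each vertex exactly once,
            # which handles the corner cases without special treatment.
            if (y1 <= py < y2) or (y2 <= py < y1):
                # Where the edge meets the testing line's height:
                x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
                if x_cross > px:
                    crossings += 1
        return crossings % 2 == 1


    square = [(0, 0), (4, 0), (4, 4), (0, 4)]
    print(is_inside((2, 2), square))  # True: one crossing, an odd number
    print(is_inside((5, 2), square))  # False: zero crossings, an even number
    ```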

    The testing line touches a corner of the curve. The curve comes up to and goes away from the same side as the testing line.

    This? Doesn’t count as the blue testing line crossing the black curve.


    The testing line touches a corner of the curve. The curve crosses over, with legs on either side of the testing line at that point.

    This? This counts as the blue testing line crossing the black curve.

    So go ahead and do this a couple times with a few labyrinths and sample points. It’s fun and elevates your doodling to the heights of 19th-century mathematics. Also once you’ve done that a couple times you’ve proved the Jordan curve theorem.

    Well, no, not quite. But you are most of the way to proving it for a special case. If the curve is a polygon, a shape made up of a finite number of line segments, then you’ve got almost all the proof done. You have to finish it off by choosing a ray, a direction, that isn’t parallel to any of the polygon’s line segments. (This is one reason this method only works for polygons, and fails for stuff like the Koch Snowflake. It also doesn’t work well with space-filling curves, which are things that exist. Yes, those are what they sound like: lines that squiggle around so much they fill up area. Some can fill volume. I swear. It’s fractal stuff.) Imagine all the lines that are parallel to that ray. Along each of those lines there’s definitely some point that’s outside the curve. You’ll need that for reference. Classify every point on the line by whether the stretch between it and your reference definitely-outside point crosses the curve an even or an odd number of times. Keep doing that for all these many parallel lines.

    And that’s it. The mess of points that have an odd number of intersections are the inside. The mess of points that have an even number of intersections are the outside.

    You won’t be surprised to know there are versions of the Jordan curve theorem for solid objects in three-dimensional space. And for hyperdimensional spaces too. You can always work out an inside and an outside, as long as space isn’t being all weird. But it might sound like it’s not much of a theorem. So you can work out an inside and an outside; so what?

    But it’s one of those great utility theorems. It pops in to places, the perfect tool for a problem you were just starting to notice existed. If I can get my rhetoric organized I hope to show that off next week, when I figure to do the Five-Color Map Theorem.

     
    • howardat58 7:00 pm on Thursday, 7 July, 2016 Permalink | Reply

      Richard Courant and Herbert Robbins: What Is Mathematics?.

      My bedside book, since 1961.

      Liked by 2 people

      • Joseph Nebus 4:10 am on Saturday, 9 July, 2016 Permalink | Reply

        I’d first read it as an undergraduate and it was one of my first online book purchases. I do keep dipping into it and finding things I feel like I should write about here. But then I have to think of something to add to it. In my case, that’s jokes, mostly.

        Like

    • mathtuition88 4:45 am on Friday, 8 July, 2016 Permalink | Reply

      Very interesting. Jordan Curve Theorem shows the rigor of math in action.

      Like

      • Joseph Nebus 4:15 am on Saturday, 9 July, 2016 Permalink | Reply

        I like it for being the sort of theorem that seems too obvious to be useful. I have got it scheduled to be used in next Thursday’s post.

        Liked by 1 person

    • Mark Jackson 12:18 am on Sunday, 17 July, 2016 Permalink | Reply

      “You won’t be surprised to know there’s versions of the Jordan curve theorem for solid objects in three-dimensional space.” Not that I ought to doubt this, but the counterintuitive discovery that the 3-sphere can be everted sprang to mind, and now I’m worried.

      Like

      • Joseph Nebus 4:42 pm on Wednesday, 20 July, 2016 Permalink | Reply

        It’s a good worry and I’ll admit this is getting deeper into topology than I’m trained in. My suspicion is that the possible self-intersections of a sphere being turned inside-out cause it to fall outside the bounds of the Jordan-Brouwer Separation Theorem. I don’t have a good argument that has to be the case though; that’s just where I would start looking.

        Like

  • Joseph Nebus 3:00 pm on Thursday, 30 June, 2016 Permalink | Reply
    Tags: , factorials, , logic, , , , ,   

    Theorem Thursday: Liouville’s Approximation Theorem And How To Make Your Own Transcendental Number 


    As I get into the second month of Theorem Thursdays I have, I think, the whole roster of weeks sketched out. Today, I want to dive into some real analysis, and the study of numbers. It’s the sort of thing you normally get only if you’re willing to be a mathematics major. I’ll try to be readable by people who aren’t. If you carry through to the end and follow directions you’ll have your very own mathematical construct, too, so enjoy.

    Liouville’s Approximation Theorem

    It all comes back to polynomials. Of course it does. Polynomials aren’t literally everything in mathematics. They just come close. Among the things we can do with polynomials is divide up the real numbers into different sets. The tool we use is polynomials with integer coefficients. Integers are the positive and the negative whole numbers, stuff like ‘4’ and ‘5’ and ‘-12’ and ‘0’.

    A polynomial is the sum of a bunch of products of coefficients multiplied by a variable raised to a power. We can use anything for the variable’s name. So we use ‘x’. Sometimes ‘t’. If we want complex-valued polynomials we use ‘z’. Some people trying to make a point will use ‘y’ or ‘s’ but they’re just showing off. Coefficients are just numbers. If we know the numbers, great. If we don’t know the numbers, or we want to write something that doesn’t commit us to any particular numbers, we use letters from the start of the alphabet. So we use ‘a’, maybe ‘b’ if we must. If we need a lot of numbers, we use subscripts: a_0, a_1, a_2, and so on, up to some a_n for some big whole number n. To talk about one of these without committing ourselves to a specific example we use a subscript of i or j or k: a_j, a_k. It’s possible that a_j and a_k equal each other, but they don’t have to, unless j and k are the same whole number. They might also be zero, but they don’t have to be. They can be any numbers. Or, for this essay, they can be any integers. So we’d write a generic polynomial f(x) as:

    f(x) = a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \cdots + a_{n - 1}x^{n - 1} + a_n x^n

    (Some people put the coefficients in the other order, that is, a_n + a_{n - 1}x + a_{n - 2}x^2 and so on. That’s not wrong. The name we give a number doesn’t matter. But it makes it harder to remember what coefficient matches up with, say, x^{14}.)

    A zero, or root, is a value for the variable (‘x’, or ‘t’, or what have you) which makes the polynomial equal to zero. It’s possible that ‘0’ is a zero, but don’t count on it. A polynomial of degree n — meaning the highest power to which x is raised is n — can have up to n different real-valued roots. All we’re going to care about is one.

    Rational numbers are what we get by dividing one whole number by another. They’re numbers like 1/2 and 5/3 and 6. They’re numbers like -2.5 and 1.0625 and negative a billion. Almost none of the real numbers are rational numbers; they’re exceptional freaks. But they are all the numbers we actually compute with, once we start working out digits. Thus we remember that to live is to live paradoxically.

    And every rational number is a root of a first-degree polynomial. That is, for your rational number there’s some polynomial f(x) = a_0 + a_1 x that it makes zero. It’s easy to tell you what it is, too. Pick your rational number. You can write that as the integer p divided by the integer q. Now look at the polynomial f(x) = p – q x. Astounded yet?

    That trick will work for any rational number. It won’t work for any irrational number. There’s no first-degree polynomial with integer coefficients that has the square root of two as a root. There are polynomials that do, though. There’s f(x) = 2 – x^2. You can find the square root of two as the zero of a second-degree polynomial. You can’t find it as the zero of any lower-degree polynomials. So we say that this is an algebraic number of the second degree.

    This goes on higher. Look at the cube root of 2. That’s another irrational number, so no first-degree polynomials have it as a root. And there are no second-degree polynomials that have it as a root, not if we stick to integer coefficients. Ah, but f(x) = 2 – x^3? That’s got it. So the cube root of two is an algebraic number of degree three.

    We can go on like this, although I admit examples for higher-order algebraic numbers start getting hard to justify. Most of the numbers people have heard of are either rational or are order-two algebraic numbers. I can tell you truly that the eighth root of two is an eighth-degree algebraic number. But I bet you don’t feel enlightened. At best you feel like I’m setting up for something. The number r(5), the smallest radius a disc can have so that five of them will completely cover a disc of radius 1, is eighth-degree and that’s interesting. But you never imagined the number before and don’t have any idea how big that is, other than “I guess that has to be smaller than 1”. (It’s just a touch less than 0.61.) I sound like I’m wasting your time, although you might start doing little puzzles trying to make smaller coins cover larger ones. Do have fun.

    Liouville’s Approximation Theorem is about approximating algebraic numbers with rational ones. Almost everything we ever do is with rational numbers. That’s all right because we can make the difference between the number we want, even if it’s r(5), and the numbers we can compute with, rational numbers, as tiny as we need. We trust that the errors we make from this approximation will stay small. And then we discover chaos science. Nothing is perfect.

    For example, suppose we need to estimate π. Everyone knows we can approximate this with the rational number 22/7. That’s about 3.142857, which is all right but nothing great. Some people know we can approximate it as 333/106. (I didn’t until I started writing this paragraph and did some research.) That’s about 3.141509, which is better. Then there’s 355/113, which is not as famous as 22/7 but is a celebrity compared to 333/106. That’s about 3.1415929. Then we get into some numbers only mathematics hipsters know: 103993/33102 and 104348/33215 and so on. Fine.

    The Liouville Approximation Theorem is about sequences that converge on an irrational number. So we have our first approximation x_1, that’s the integer p_1 divided by the integer q_1. So, 22 and 7. Then there’s the next approximation x_2, that’s the integer p_2 divided by the integer q_2. So, 333 and 106. Then there’s the next approximation yet, x_3, that’s the integer p_3 divided by the integer q_3. As we look at more and more approximations, x_j’s, we get closer and closer to the actual irrational number we want, in this case π. Also, the denominators, the q_j’s, keep getting bigger.
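
    Those hipster fractions aren’t hard to dig up yourself. Here’s a sketch in Python that reads the continued-fraction terms off the floating-point value of π and assembles the convergents. Double-precision π is only trustworthy for the first several terms, so I stop early.

    ```python
    from fractions import Fraction
    import math

    x = math.pi
    a = int(x)                   # the first continued-fraction term, 3
    p0, p1 = 1, a                # running numerators
    q0, q1 = 0, 1                # running denominators
    frac = x - a
    for _ in range(5):
        a = int(1 / frac)        # the next continued-fraction term
        frac = 1 / frac - a
        p0, p1 = p1, a * p1 + p0
        q0, q1 = q1, a * q1 + q0
        print(Fraction(p1, q1), float(Fraction(p1, q1)))
    # 22/7 3.142857142857143
    # 333/106 3.141509433962264
    # 355/113 3.1415929203539825
    # 103993/33102 3.1415926530119026
    # 104348/33215 3.141592653921421
    ```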

    The theorem speaks of having an algebraic number, call it x, of some degree n greater than 1. Then we have this limit on how good an approximation can be. The difference between the number x that we want, and our best approximation p / q, has to be larger than the number (1/q)^{n + 1}. The approximation might be higher than x. It might be lower than x. But it will be off by at least the n-plus-first power of 1/q.

    Polynomials let us separate the real numbers into infinitely many tiers of numbers. They also let us say how well the most accessible tier of numbers, rational numbers, can approximate these more exotic things.

    One of the things we learn by looking at numbers through this polynomial screen is that there are transcendental numbers. These are numbers that can’t be the root of any polynomial with integer coefficients. π is one of them. e is another. Nearly all numbers are transcendental. But the proof that any particular number is one is hard. Joseph Liouville showed that transcendental numbers must exist by using continued fractions. But this approximation theorem tells us how to make our own transcendental numbers. This won’t be any number you or anyone else has ever heard of, unless you pick a special case. But it will be yours.

    You will need:

    1. a_1, an integer from 1 to 9, such as ‘1’, ‘9’, or ‘5’.
    2. a_2, another integer from 1 to 9. It may be the same as a_1 if you like, but it doesn’t have to be.
    3. a_3, yet another integer from 1 to 9. It may be the same as a_1 or a_2 or, if it so happens, both.
    4. a_4, one more integer from 1 to 9 and you know what? Let’s summarize things a bit.
    5. A whopping great big gob of integers a_j, every one of them from 1 to 9, for every possible integer ‘j’, so technically this is infinitely many of them.
    6. Comfort with the notation n!, which is the factorial of n. For whole numbers that’s the product of every whole number from 1 to n, so, 2! is 1 times 2, or 2. 3! is 1 times 2 times 3, or 6. 4! is 1 times 2 times 3 times 4, or 24. And so on.
    7. Not to be thrown by me writing -n!. By that I mean work out n! and then multiply that by -1. So -2! is -2. -3! is -6. -4! is -24. And so on.

    Now, assemble them into your very own transcendental number z, by this formula:

    z = a_1 \cdot 10^{-1} + a_2 \cdot 10^{-2!} + a_3 \cdot 10^{-3!} + a_4 \cdot 10^{-4!} + a_5 \cdot 10^{-5!} + a_6 \cdot 10^{-6!} \cdots

    If you’ve done it right, this will look something like:

    z = 0.a_{1}a_{2}000a_{3}00000000000000000a_{4}0000000 \cdots
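
    If you’d like to see your number written out, a few lines of code will place the digits for you. This is just a sketch in Python; the list named digits stands in for whatever you chose as a_1 through a_5, and choosing all 1’s gives the Liouville Constant mentioned below.

    ```python
    from math import factorial

    digits = [1, 1, 1, 1, 1]          # your a_1 through a_5; anything 1-9 works

    places = factorial(len(digits))   # the last digit lands at place 5! = 120
    expansion = ["0"] * places
    for j, a_j in enumerate(digits, start=1):
        expansion[factorial(j) - 1] = str(a_j)   # digit a_j sits at place j!

    print("0." + "".join(expansion))
    # 0.110001000000000000000001000... with 1's at places 1, 2, 6, 24, and 120
    ```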

    Ah, but, how do you know this is transcendental? We can prove it is. The proof is by contradiction, which is how a lot of great proofs are done. We show nonsense follows if the thing isn’t true, so the thing must be true. (There are mathematicians that don’t care for proof-by-contradiction. They insist on proof by charging straight ahead and showing a thing is true directly. That’s a matter of taste. I think every mathematician feels that way sometimes, to some extent or on some issues. The proof-by-contradiction is easier, at least in this case.)

    Suppose that your z here is not transcendental. Then it’s got to be an algebraic number of degree n, for some finite number n. That’s what it means not to be transcendental. I don’t know what n is; I don’t care. There is some n and that’s enough.

    Now, let’s let z_m be a rational number approximating z. We find this approximation by taking the first m! digits after the decimal point. So, z_1 would be just the number 0.a_1. z_2 is the number 0.a_1a_2. z_3 is the number 0.a_1a_2000a_3. I don’t know what m you like, but that’s all right. We’ll pick a nice big m.

    So what’s the difference between z and z_m? Well, it can’t be larger than 10 times 10^{-(m + 1)!}. This is for the same reason that π minus 3.14 can’t be any bigger than 0.01.

    Now suppose we have the best possible rational approximation, p/q, of your number z. Its first m! digits give the fraction p / 10^{m!}. This will be z_m. And by the Liouville Approximation Theorem, then, the difference between z and z_m has to be at least as big as (1/10^{m!})^{n + 1}.

    So we know the difference between z and zm has to be larger than one number. And it has to be smaller than another. Let me write those out.

    \frac{1}{10^{m! (n + 1)}} < |z - z_m | < \frac{10}{10^{(m + 1)!}}

    We don’t need the z – z_m anymore. And that thing on the rightmost side we can rewrite in a form I’ll swear is a little easier to use. What we have left is:

    \frac{1}{10^{m! (n + 1)}} < \frac{1}{10^{(m + 1)! - 1}}

    And this will be true whenever the number m!(n + 1) is greater than (m + 1)! – 1.

    But there’s the thing. This isn’t true whenever m is greater than n. So the difference between your alleged transcendental number and its best-possible rational approximation has to be simultaneously bigger than a number and smaller than that same number without being equal to it. Supposing your number is anything but transcendental produces nonsense. Therefore, congratulations! You have a transcendental number.
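
    You can watch the squeeze fail numerically, if you like. Here’s a sketch in Python, supposing the alleged degree were n = 3: the Liouville exponent m!(n + 1) stays ahead of the truncation exponent (m + 1)! – 1 only until m passes n.

    ```python
    from math import factorial

    n = 3                                     # the alleged algebraic degree
    for m in range(2, 7):
        liouville = factorial(m) * (n + 1)    # exponent of the lower bound
        truncation = factorial(m + 1) - 1     # exponent of the upper bound
        print(m, liouville, truncation, liouville > truncation)
    # 2 8 5 True
    # 3 24 23 True
    # 4 96 119 False    (the contradiction, now that m > n)
    # 5 480 719 False
    # 6 2880 5039 False
    ```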

    If you chose all 1’s for your a_j’s, then you have what is sometimes called the Liouville Constant. If you didn’t, you may have a transcendental number nobody’s ever noticed before. You can name it after someone if you like. That’s as meaningful as naming a star for someone and cheaper. But you can style it as weaving someone’s name into the universal truth of mathematics. Enjoy!

    I’m glad to finally give you a mathematics essay that lets you make something you can keep.

     
    • Andrew Wearden 3:29 pm on Thursday, 30 June, 2016 Permalink | Reply

      Admittedly, I do have an undergrad math degree, but I thought you did a good job explaining this. Out of curiosity, is there a reason you can’t use the integer ‘0’ when creating a transcendental number?

      Liked by 1 person

      • Joseph Nebus 6:45 am on Sunday, 3 July, 2016 Permalink | Reply

        Thank you. I’m glad you followed.

        If I’m not missing a trick there’s no reason you can’t slip a couple of zeroes into the transcendental number. But there is a problem if you have nothing but zeroes after some point. If, say, everything from a_9 on were zero, then you’d have a rational number, which is as un-transcendental as it gets. So it’s easier to build a number without selecting zeroes rather than work out a rule that allows zeroes only in non-dangerous configurations.

        Like

  • Joseph Nebus 3:00 pm on Tuesday, 14 June, 2016 Permalink | Reply
    Tags: , , , , logic,   

    What’s The Shortest Proof I’ve Done? 


    I didn’t figure to have a bookend for last week’s “What’s The Longest Proof I’ve Done?” question. I don’t keep track of these things, after all. And the length of a proof must be a fluid concept. If I show something is a direct consequence of a previous theorem, is the proof’s length the two lines of new material? Or is it all the proof of the previous theorem plus two new lines?

    I would think the shortest proof I’d done was showing that the logarithm of 1 is zero. This would be starting from the definition of the natural logarithm of a number x as the definite integral of 1/t on the interval from 1 to x. But that requires a bunch of analysis to support the proof. And the Intermediate Value Theorem. Does that stuff count? Why or why not?

    But this happened to cross my desk: The Shortest-Known Paper Published in a Serious Math Journal: Two Succinct Sentences, an essay by Dan Colman. It reprints a paper by L J Lander and T R Parkin which appeared in the Bulletin of the American Mathematical Society in 1966.

    It’s about Euler’s Sums of Powers Conjecture. This is a spinoff of Fermat’s Last Theorem. Leonhard Euler observed that you need at least two whole numbers so that their squares add up to a square. And you need three cubes of whole numbers to add up to the cube of a whole number. Euler speculated you needed four whole numbers so that their fourth powers add up to a fourth power, five whole numbers so that their fifth powers add up to a fifth power, and so on.

    And it’s not so. Lander and Parkin found that this conjecture is false. They did it the new old-fashioned way: they set a computer to test cases. And they found four whole numbers whose fifth powers add up to a fifth power. So the quite short paper answers a long-standing question, and would be hard to beat for accessibility.
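
    The four numbers they found were 27, 84, 110, and 133; their fifth powers add up to 144 to the fifth power. That’s small enough to check in a couple lines of Python:

    ```python
    # Lander and Parkin's counterexample to Euler's conjecture, checked directly.
    print(27**5 + 84**5 + 110**5 + 133**5 == 144**5)   # True
    print(144**5)                                      # 61917364224
    ```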

    There is another famous short proof sometimes credited as the most wordless mathematical presentation. Frank Nelson Cole gave it on the 31st of October, 1903. It was about the Mersenne number 2^67 – 1, or in human notation, 147,573,952,589,676,412,927. It was already known the number wasn’t prime. (People wondered because numbers of the form 2^n – 1 often lead us to perfect numbers. And those are interesting.) But nobody knew what its factors were. Cole gave his talk by going up to the board, working out 2^67 – 1, and then moving to the other side of the board. There he wrote out 193,707,721 × 761,838,257,287, and showed what that was. Then, per legend, he sat down without ever saying a word, and took in the standing ovation.

    I don’t want to cast aspersions on a great story like that. But mathematics is full of great stories that aren’t quite so. And I notice that one of Cole’s doctoral students was Eric Temple Bell. Bell gave us a great many tales of mathematics history that are grand and great stories that just weren’t so. So I want it noted that I don’t know where we get this story from, or how it may have changed in the retellings. But Cole’s proof is correct, at least according to Octave.
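
    If you don’t have Octave handy, the check is just as quick in Python, whose whole numbers never overflow:

    ```python
    mersenne = 2**67 - 1
    print(mersenne)                               # 147573952589676412927
    print(193707721 * 761838257287 == mersenne)   # True
    ```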

    So not every proof is too long to fit in the universe. But then I notice that Mathworld’s page regarding the Euler Sum of Powers Conjecture doesn’t cite the 1966 paper. It cites instead Lander and Parkin’s “A Counterexample to Euler’s Sum of Powers Conjecture” from Mathematics of Computation volume 21, number 97, of 1967. There the paper has grown to three pages, although it’s only a couple of paragraphs on the first page and three lines of citations on the third. It’s not so easy to read either, but it does explain how they set about searching for counterexamples, and it may give you some better idea of how numerical mathematicians find things.

     
  • Joseph Nebus 3:00 pm on Tuesday, 7 June, 2016 Permalink | Reply
    Tags: , logic, , ,   

    What’s The Longest Proof I’ve Done? 


    You know what’s a question I’m surprised I don’t get asked? I mean in the context of being a person with an advanced mathematics degree. I don’t get asked what’s the longest proof I’ve ever done. Either just reading to understand, or proving for myself. Maybe people are too intimidated by the idea of advanced mathematics to try asking such things. Maybe they’re afraid I’d bury them under a mountain of technical details. But I’d imagine musicians get asked what the hardest or the longest piece they’ve memorized is. I’m sure artists get asked what painting (or sculpture, or whatnot) they’ve worked on the longest.

    It’s just as well nobody’s asked. I’m not sure what the longest proof I’ve done, or gone through, would even be. Some of it is because there’s an inherent arbitrariness to the concept of “a proof”. Proofs are arguments, and they’re almost always made up of many smaller pieces. The advantage of making these small pieces is that small proofs are usually easier to understand. We can then assemble the conclusions of many small proofs to make one large proof. But then how long was the large proof? Does it contain all the little proofs that go into it?

    And, truth be told, I didn’t think to pay attention to how long any given proof was. If I had to guess I would think the longest proof I’d done, just learned, would be from a grad school course in ordinary differential equations. This is the way we study systems in which how things are changing depends on what things are now. These often match physical, dynamic, systems very well. I remember in the class spending several two-hour sessions trying to get through a major statement in a field called Kolmogorov-Arnold-Moser Theory. This is a major statement about dynamical systems being perturbed, given a little shove. And it describes what conditions make the little shove really change the way the whole system behaves.

    What I’m getting to is that there appears to be a new world’s record-holder for the Longest Actually Completed Proof. It’s about a problem I never heard of before but that’s apparently been open since the 1980s. It’s known as the Boolean Pythagorean Triples problem: can you split the whole numbers 1, 2, 3, and so on into two sets so that neither set contains a Pythagorean triple, three numbers with a^2 + b^2 = c^2? (It turns out you can’t; the numbers 1 through 7,825 already make it impossible.) The MathsByAGirl blog has an essay about it, and gives some idea of its awesome size. It’s about 200 terabytes of text. As you might imagine, it’s a proof by exhaustion. That is, it divides up a problem into many separate cases, and tries out all the cases. That’s a legitimate approach. It tends to produce proofs that are long and easy to verify, at least at each particular case. They might not be insightful, that is, they might not suggest new stuff to do, but they work. (And I don’t know that this proof doesn’t suggest new stuff to do. I haven’t read it, for good reason. It’s well outside my specialty.)

    But proofs can be even bigger. John Carlos Baez published a while back an essay, “Insanely Long Proofs”. And that’s awe-inspiring. Baez is able to provide theorems which we know to be true. You’ll be able to understand what they conclude, too. And in the logic system applicable to them, their proofs would be so long that the entire universe isn’t big enough just to write down the number of symbols needed to complete the proof. Let me say that again. It’s not that writing out the proof would take more than all the space in the universe. It’s that just writing out how long the proof would be would take more than all the space in the universe.

    So you should ask, then how do we know it’s true? Baez explains.

     
    • MJ Howard 3:21 pm on Tuesday, 7 June, 2016 Permalink | Reply

      I think part of the problem is that, in general, non-mathematicians don’t have much of a concept of what working mathematicians actually do. Most of the work I do that I think of as Mathematics consists of specific applications and isn’t terribly concerned with proof as such.

      That said, the most time I spent working on a proof was as an undergrad. It was a plane tiling problem involving constraints on the dimensions of the plane. I spent about a week and a half on it and only managed to prove sufficiency.

      Liked by 1 person

      • Joseph Nebus 3:08 am on Saturday, 11 June, 2016 Permalink | Reply

        You’re right. It might also be that people don’t think much about what mathematicians do all day. I’m not perfectly clear on it myself, I must admit. But when I was a real working mathematician most of my research was really numerical simulations and experiments. There were a couple of little cases where I needed to prove something, but it was all in the service of either saying why my numerical experiments should work, or why a surprising result I’d found experimentally actually made sense after all.

        My biggest work in actually coming up with proofs might have been in a real analysis course I took as a grad student. I’d had a lovely open-ended assignment and kept chaining together little proofs about one problem to build a notebook of stuff. This was all proofs about logarithms and exponentials, so none of the results were anything remotely new or surprising, but it was really satisfying to get underneath some computation rules and work them out.

        Like

    • Amie 10:26 pm on Tuesday, 7 June, 2016 Permalink | Reply

      I’m more likely to be asked ‘what is the longest equation that I’ve solved’? :)

      Related to what MJ said, when I interview high-school students for undergraduate maths scholarships, I ask them what is the longest they have ever spent solving a problem. The answer is usually 15 minutes. Occasionally someone says overnight. That gives us one clue as to what non-mathematicians (albeit maths students) think it means to be good at maths and how mathematicians work (that is, solve problems relatively quickly and move on). To be fair, I don’t expect these students to answer any differently because they respond based on (1) their experience and (2) what they think we want to hear. But it is illuminating.

      I’d never heard of the Boolean Pythagorean Triples problem until earlier this week, either. I love that there are easy to understand maths ideas that I’ve never heard of. No idea how the proof works either ;).

      Liked by 1 person

      • Joseph Nebus 3:15 am on Saturday, 11 June, 2016 Permalink | Reply

        Longest equation that I’ve solved … hm. Well, if it’s the equation I spent the longest time in solving that’s got to be something in the inviscid fluid flow that made up a lot of my thesis. The physically longest equation I don’t know. I remember shortly after starting into high school algebra at all trying to think of the hardest possible equation. Given that all I really had to work with was polynomials my first guess was just something with a bunch of variables all raised to high powers. But I also worked out that this was a boring equation. Never did work out what would be both complicated and interesting at once.

        I wonder how long non-mathematicians expect gets spent on leads that ultimately go nowhere, before a workable approach to the problem is worked out. Or if not nowhere then at least go into directions that don’t work without a lot of re-thinking and re-casting. There is a desire to show how to get right answers efficiently, for which people can’t be blamed. But the system of learning how to think of ways to get answers probably needs false starts and long periods of pondering that feel like they don’t get anywhere.

        Liked by 1 person

    • mathsbyagirl 7:54 am on Saturday, 11 June, 2016 Permalink | Reply

      I must say, I love your style of writing!

      Like

  • Joseph Nebus 3:00 pm on Sunday, 5 June, 2016 Permalink | Reply
    Tags: , , logic,   

    Reading the Comics, June 3, 2016: Word Problems Without Pictures Edition 


    I haven’t got Sunday’s comics under review yet. But the past seven days were slow ones for mathematically-themed comics. Maybe Comic Strip Master Command is under the impression that it’s the (United States) summer break already. It’s not, although Funky Winkerbean did a goofy sequence graduating its non-player-character students. And Zits has been doing a summer reading storyline that only makes sense if Jeremy Duncan is well into summer. Maybe Comic Strip Master Command thinks it’s a month later than it actually is?

    Tony Cochrane’s Agnes for the 29th of May looks at first like a bit of nonsense wordplay. But whether a book with the subject “All About Books” would discuss itself, and how it would discuss itself, is a logic problem. And not just a logic problem. Start from pondering how the book All About Books would describe the content of itself. You can go from that to an argument that it’s impossible to compress every possible message. Imagine an All About Books which contained shorthand descriptions of every book. And the descriptions have enough detail to exactly reconstruct each original book. But then what would the book list for the description of All About Books?

    And self-referential things can lead to logic paradoxes swiftly. You’d have some fine ones if Agnes were to describe a book All About Not-Described Books. Is the book described in itself? The question again sounds silly. But thinking seriously about it leads us to the decidability problem. Any interesting-enough logical system will always have statements that are meaningful and true that no one can prove.

    Furthermore, the suggestion of an “All About `All About Books’ Book” suggests to me power sets. That’s the set of all the ways you can collect the elements of a set. Power sets are always bigger than the original set. They lead to the staggering idea that there are many sizes of infinitely large sets, a never-ending stack of bigness.
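
    If you’d like to watch that bigness get started, here’s a quick sketch in Python; the names are mine. The power set of a three-element set has 2^3 = 8 members, and every element you add doubles the count.

    ```python
    from itertools import chain, combinations

    def power_set(items):
        """Every subset of items, from the empty set up to the whole thing."""
        items = list(items)
        return list(chain.from_iterable(
            combinations(items, r) for r in range(len(items) + 1)))

    books = ["Book A", "Book B", "Book C"]
    for subset in power_set(books):
        print(subset)
    print(len(power_set(books)))   # 8, which is 2 raised to the 3rd power
    ```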

    Robb Armstrong’s Jump Start for the 31st of May is part of a sequence about getting a tutor for a struggling kid. That it’s mathematics is incidental to the storyline, it must be said. (It’s an interesting storyline, partly about Jojo’s father, a police officer, coming to trust Ray, an ex-convict. Jump Start tells many interesting and often deeply weird storylines. And it never loses its camouflage of being an ordinary family comic strip.) It uses the familiar gimmick of motivating a word problem by making it about something tangible.

    Ken Cursoe’s Tiny Sepuku for the 2nd of June uses the motif of non-Euclidean geometry as some supernatural magic. It’s a small reference, you might miss it. I suppose it is true that a high-dimensional analogue to conic sections would focus things from many dimensions. If those dimensions match time and space, maybe it would focus something from all humanity into the brain. I would try studying instead, though.

    Russell Myers’s Broom Hilda for the 3rd is a resisting-the-word-problems joke. It’s funny to figure on missing big if you have to be wrong at all. But something you learn in numerical mathematics, particularly, is that it’s all right to start from a guess. Often you can take a wrong answer and improve it. If you can’t get the exact right answer, you can usually get a better answer. And often you can get as good as you need. So in practice, sorry to say, I can’t recommend going for the ridiculous answer. You can do better.

     
    • seaangel4444 4:41 pm on Sunday, 5 June, 2016 Permalink | Reply

      LOL I love the Broom Hilda cartoon, Joseph! And here I am, “smiling”! :) Cher xo

      Like

      • Joseph Nebus 2:56 am on Saturday, 11 June, 2016 Permalink | Reply

        Aw, quite glad you like. I do enjoy doing these comic strip reviews, partly for the chance to talk about subjects, partly because people get to see strips they hadn’t noticed before.

        Liked by 1 person

  • Joseph Nebus 3:00 pm on Wednesday, 20 April, 2016 Permalink | Reply
    Tags: , boredom, , , logic, ,   

    A Leap Day 2016 Mathematics A To Z: Wlog 


    Wait for it.

    Wlog.

    I’d like to say a good word for boredom. It needs the good words. The emotional state has an appalling reputation. We think it’s the sad state someone’s in when they can’t find anything interesting. It’s not. It’s the state in which we are so desperate for engagement that anything is interesting enough.

    And that isn’t a bad thing! Finding something interesting enough is a precursor to noticing something curious. And curiosity is a precursor to discovery. And discovery is a precursor to seeing a fuller richness of the world.

    Think of being stuck in a waiting room, deprived of reading materials or a phone to play with or much of anything to do. But there is a clock. Your classic analog-face clock. Its long minute hand sweeps out the full 360 degrees of the circle once every hour, 24 times a day. Its short hour hand sweeps out that same arc every twelve hours, only twice a day. Why is the big unit of time marked with the short hand? Good question, I don’t know. Probably, ultimately, because it changes so much less than the minute hand that it doesn’t need the attention a longer hand would draw to it.

    But let our waiting mathematician get a little more bored, and think more about the clock. The hour and minute hand must sometimes point in the same direction. They do at 12:00 by the clock, for example. And they will at … a little bit past 1:00, and a little more past 2:00, and a good while after 9:00, and so on. How many times during the day will they point the same direction?

    Well, one easy way to do this is to work out how long it takes the hands, once they’ve met, to meet up again. Presumably we don’t want to wait the whole hour-and-some-more-time for it. But how long is that? Well, we know the hands start out pointing the same direction at 12:00. The first time after that will be after 1:00. At exactly 1:00 the hour hand is 30 degrees clockwise of the minute hand. The minute hand will need five minutes to catch up to that. In those five minutes the hour hand will have moved another 2.5 degrees clockwise. The minute hand needs about four-tenths of a minute to catch up to that. In that time the hour hand moves — OK, we’re starting to see why Zeno was not an idiot. He never was.

    But we have this roughly worked out. It’s about one hour, five and a half minutes between one time the hands meet and the next. In the course of twelve hours there’ll be time for them to meet up … oh, of course, eleven times. Over the course of the day they’ll meet up 22 times and we can get into a fight over whether midnight counts as part of today, tomorrow, or both days, or neither. (The answer: pretend the day starts at 12:01.)
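
    That arithmetic is easy to hand off to a computer, if you’d rather not fight Zeno yourself. A sketch in Python, using exact fractions: the minute hand gains 330 degrees on the hour hand every hour, so meetings come every 360/330 = 12/11 hours, eleven to each half-day.

    ```python
    from fractions import Fraction

    interval = Fraction(360, 330)       # hours between meetings: 12/11
    for k in range(11):
        t = k * interval                # hours after 12:00
        h = int(t)
        m = float((t - h) * 60)
        print(f"{h or 12}:{m:05.2f}")
    # 12:00.00, 1:05.45, 2:10.91, 3:16.36, ... 10:54.55, then 12:00 again
    ```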

    Hold on, though. How do we know that the time between the hands meeting up at 12:00 and the one at about 1:05 is the same as the time between the hands meeting up near 1:05 and the next one, sometime a little after 2:10? Or between that one and the one at a little past 3:15? What grounds do we have for saying this one interval is a fair representation of them all?

    We can argue that it should be fairly enough. Imagine that all the markings were washed off the clock. It’s just two hands sweeping around in circles, one relatively fast, one relatively slow, forever. Give the clockface a spin. When the hands come together again rotate the clock so those two hands are vertical, the “12:00” position. Is this actually 12:00? … Well, we’ve got a one-in-eleven chance it is. It might be a little past 1:05; it might be that time something past 6:30. The movement of the clock hands gives no hint what time it really is.

    And that is why we’re justified taking this one interval as representative of them all. The rate at which the hands move, relative to each other, doesn’t depend on what the clock face behind it says. The rate is, if the clock isn’t broken, always the same. So we can use information about one special case that happens to be easy to work out to handle all the cases.

    That’s the mathematics term for this essay. We can study the one specific case without loss of generality, or as it’s inevitably abbreviated, wlog. This is the trick of studying something possibly complicated, possibly abstract, by looking for a representative case. That representative case may tell us everything we need to know, at least about this particular problem. Generality means what you might figure from the ordinary English meaning of it: it means this answer holds in general, as opposed to in this specific instance.

    Some thought has to go in to choosing the representative case. We have to pick something that doesn’t, somehow, miss out on a class of problems we would want to solve. We mustn’t lose the generality. And it’s an easy mistake to make, especially as a mathematics student first venturing into more abstract waters. I remember coming up against that often when trying to prove properties of infinitely long series. It’s so hard to reason something about a bunch of numbers whose identities I have no idea about; why can’t I just use the sequence, oh, 1/1, 1/2, 1/3, 1/4, et cetera and let that be good enough? Maybe 1/1, 1/4, 1/9, 1/16, et cetera for a second test, just in case? It’s because it takes time to learn how to safely handle infinities.

    It’s still worth doing. Few of us are good at manipulating things in the abstract. We have to spend more mental energy imagining the thing rather than asking the questions we want of it. Reducing that abstraction — even if it’s just a little bit, changing, say, from “an infinitely-differentiable function” to “a polynomial of high enough degree” — can rescue us. We can try out things we’re confident we understand, and derive from it things we don’t know.

    I can’t say that a bored person observing a clock would deduce all this. Parts of it, certainly. Maybe all, if she thought long enough. I believe it’s worth noticing and thinking of these kinds of things. And it’s why I believe it’s fine to be bored sometimes.

     
    • howardat58 3:33 pm on Wednesday, 20 April, 2016 Permalink | Reply

      Your point about how mathematicians think is so vital and so overlooked in the teaching of math in schools. Even the question “Is it true in a special case?” is a question rarely asked.

      Liked by 1 person

      • Joseph Nebus 2:13 am on Friday, 22 April, 2016 Permalink | Reply

        Well, thank you. While writing this I did get to thinking about how we find things that can be picked out without loss of generality, versus actually losing generality. And I didn’t think of a good example of losing generality, partly because I’m writing these much closer to deadline than I imagined and partly because I thought I was running long as it was.

        I might put a follow-up post on about how to pick examples, though.

        Liked by 2 people

  • Joseph Nebus 3:00 pm on Thursday, 14 April, 2016 Permalink | Reply
    Tags: , flash cards, , , logic, , , ,   

    Reading the Comics, April 10, 2016: Four-Digit Prime Number Edition 


    In today’s installment of Reading The Comics, mathematics gets name-dropped a bunch in strips that aren’t really about my favorite subject other than my love. Also, I reveal the big lie we’ve been fed about who drew the Henry comic strip attributed to Carl Anderson. Finally, I get a question from Queen Victoria. I feel like this should be the start of a podcast.

    Todd responds to arithmetic flash cards: 'Tater tots! Sloppy Joes! Mac and Cheese!' 'Todd, what are you doing? These are all math!' 'Sorry ... every day at school we have math right before lunch and you told me to say the first thing that pops into my mind!'

    Patrick Roberts’ Todd the Dinosaur for the 6th of April, 2016.

    Patrick Roberts’ Todd the Dinosaur for the 6th of April just name-drops mathematics. The flash cards suggest it. They’re almost iconic for learning arithmetic. I’ve seen flash cards for other subjects. But apart from learning the words of other languages I’ve never been able to make myself believe they’d work. On the other hand, I haven’t used flash cards to learn (or teach) things myself.

    Mom, taking the mathematics book away from Bad Dad: 'I'll take over now ... fractions and long division aren't `scientifically accepted as unknowable`.'

    Joe Martin’s Boffo for the 7th of April, 2016. I bet the link expires in early May.

    Joe Martin’s Boffo for the 7th of April is a solid giggle. (I have a pretty watery giggle myself.) There are unknowable, or at least unprovable, things in mathematics. Any logic system with enough rules to be interesting has ideas which would make sense, and which might be true, but which can’t be proven. Arithmetic is such a system. But just fractions and long division by itself? No, I think we need something more abstract for that.

    Henry is sent to bed. He can't sleep until he reads from his New Math text.

    Carl Anderson’s Henry for the 7th of April, 2016.

    Carl Anderson’s Henry for the 7th of April is, of course, a rerun. It’s also a rerun that gives away that the “Carl Anderson” credit is a lie. Anderson turned over drawing the comic strip in 1942 to John Liney, for weekday strips, and Don Trachte for Sundays. There is no possible way the phrase “New Math” appeared on the cover of a textbook Carl Anderson drew. Liney retired in 1979, and Jack Tippit took over until 1983. Then Dick Hodgins, Jr, drew the strip until 1990. So depending on how quickly word of the New Math penetrated Comic Strip Master Command, this was drawn by either Liney, Tippit, or possibly Hodgins. (Peanuts made New Math jokes in the 60s, but it does seem the older the comic strip the longer it takes to mention new stuff.) I don’t know when these reruns date from. I also don’t know why Comics Kingdom is fibbing about the artist. But then they went and cancelled The Katzenjammer Kids without telling anyone either.

    Eric the Circle for the 8th, this one by “lolz”, declares that Eric doesn’t like being graphed. This is your traditional sort of graph, one in which points with coordinates x and y are on the plot if their values make some equation true. For a circle, that equation’s something like (x – a)^2 + (y – b)^2 = r^2. Here (a, b) are the coordinates for the point that’s the center of the circle, and r is the radius of the circle. This looks a lot like Eric is centered on the origin, the point with coordinates (0, 0). It’s a popular choice. Any center is as good. Another would just have equations that take longer to work with.

    Richard Thompson’s Cul de Sac rerun for the 10th is so much fun to look at that I’m including it even though it just name-drops mathematics. The joke would be the same if it were something besides fractions. Although see Boffo.

    Norm Feuti’s Gil rerun for the 10th takes on mathematics’ favorite group theory application, the Rubik’s Cube. It’s the way I solved them best. This approach falls outside the bounds of normal group theory, though.

    Mac King and Bill King’s Magic in a Minute for the 10th shows off a magic trick. It’s also a non-Rubik’s-cube problem in group theory. One of the groups that a mathematics major learns, after integers-mod-four and the like, is the permutation group. In this, the act of swapping two (or more) things is a thing. This puzzle restricts the allowed permutations down to swapping one item with the thing next to it. And thanks to that, an astounding result emerges. It’s worth figuring out why the trick would work. If you can figure out the reason the first set of switches have to leave a penny on the far right then you’ve got the gimmick solved.

    Pab Sungenis’s New Adventures of Queen Victoria for the 10th made me wonder just how many four-digit prime numbers there are. If I haven’t worked this out wrong, there are 1,061 of them.
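
    If you’d like to check that count for yourself, a simple sieve does it in a moment. A sketch in Python:

    ```python
    def count_primes(lo, hi):
        """Count the primes p with lo <= p <= hi by the Sieve of Eratosthenes."""
        sieve = [True] * (hi + 1)
        sieve[0] = sieve[1] = False
        for i in range(2, int(hi ** 0.5) + 1):
            if sieve[i]:
                sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
        return sum(sieve[lo : hi + 1])

    print(count_primes(1000, 9999))   # 1061
    ```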

     
  • Joseph Nebus 3:00 pm on Monday, 14 March, 2016 Permalink | Reply
    Tags: , formal language, , , , logic,   

    A Leap Day 2016 Mathematics A To Z: Grammar 


    My next entry for this A To Z was another request, this one from Jacob Kanev, who doesn’t seem to have a WordPress or other blog. (If I’m mistaken, please, let me know.) Kanev’s given me several requests, some of them quite challenging. Some too challenging: I have to step back from describing “both context sensitive and not” kinds of grammar just now. I hope all will forgive me if I just introduce the base idea.

    Grammar.

    One of the ideals humans hold when writing a mathematical proof is to crush all humanity from the proof. It’s nothing personal. It reflects a desire to be certain we have proved things without letting any unstated assumptions or unnoticed biases interfere. The 19th century was a lousy century for mathematicians and their intuitions. Many ideas that seemed clear enough turned out to be paradoxical. It’s natural to want to not make those mistakes again. We can succeed.

    We can do this by stripping out everything but the essentials. We can even do away with words. After all, if I say something is a “square”, that suggests I mean what we mean by “square” in English. Our mathematics might not have proved all the square-ness of the thing. And so we reduce the universe to symbols. Letters will do as symbols, if we want to be kind to our typesetters. We do want to be kind now that, thanks to LaTeX, we do our own typesetting.

    This is called building a “formal language”. The “formal” here means “relating to the form” rather than “the way you address people when you can’t just say `heya, gang’.” A formal language has two important components. One is the symbols that can be operated on. The other is the operations you can do on the symbols.

    If we’ve set it all up correctly then we get something wonderful. We have “statements”. They’re strings of the various symbols. Some of the statements are axioms; they’re assumed to be true without proof. We can turn a statement into another one by using a statement we have and one of the operations. If the operation requires, we can add in something else we already know to be true. Something we’ve already proven.

    Any statement we build this way — starting from an axiom and building with the valid operations — is a new and true statement. It’s a theorem. The proof of the theorem? It’s the full sequence of symbols and operations that we’ve built. The line between advanced mathematics and magic is blurred. To give a theorem its full name is to give its proof. (And now you understand why the biographies of many of the pioneering logicians of the late 19th and early 20th centuries include a period of fascination with the Kabbalah and other forms of occult or gnostic mysticism.)

    A grammar is what’s required to describe a language like this. It’s defined to be a quartet of properties. The first property is the collection of symbols that can’t be the end of a statement. These are called nonterminal symbols. The second property is the collection of symbols that can end a statement. These are called terminal symbols. (You see why we want to have those as separate lists.) The third property is the collection of rules that let you build new statements from old. The fourth property is the collection of things we take to be true to start. We only have finitely many options for each of these, at least for your typical grammar. I imagine someone has experimented with infinite grammars. But that hasn’t become enough of a research field that people have to pay attention to it. Not yet, anyway.

    Now it’s reasonable to ask if we need mathematicians at all. If building up theorems is just a matter of applying the finitely many rules of inference on finitely many collections of symbols, finitely many times over, then what about this can’t be done by computer? And done better by a computer, since a computer doesn’t need coffee, or bathroom breaks an hour later, or the hope of moving to a tenure-track position?

    Well, we do need mathematicians. I don’t say that just because I hope someone will give me money in exchange for doing mathematics. It’s because setting up a computer to just grind out every possible theorem will never turn up what you want to know now. There are several reasons for this.

    Here’s a way to see why. It’s drawn from Douglas Hofstadter’s Gödel, Escher, Bach, a copy of which you can find in any college dorm room or student organization office. At least you could back when I was an undergraduate. I don’t know what the kids today use.

    Anyway, this scheme has three nonterminal symbols: I, M, and U. As a terminal symbol … oh, let’s just use the space at the end of a string. That way everything looks like words. We will include a couple variables, lowercase letters like x and y and z. They stand for any string of nonterminal symbols. They’re falsework. They help us get work done, but must not appear in our final result.

    There’s four rules of inference. The first: if xI is valid, then so is xIM. The second: if Mx is valid, then so is Mxx. The third: if MxIIIy is valid, then so is MxUy. The fourth: if MxUUy is valid, then so is Mxy.

    We have one axiom, assumed without proof to be true: MI.

    So let’s putter around some. MI is true. So by the second rule, so is MII. That’s a theorem. And since MII is true, by the second rule again, so is MIIII. That’s another theorem. Since MIIII is true, by the first rule, so is MIIIIM. We’ve got another theorem already. Since MIIIIM is true, by the third rule, so is MIUM. We’ve got another theorem. For that matter, since MIIIIM is true, again by the third rule, so is MUIM. Would you like MIUMIUM? That’s waiting there to be proved too.
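
    If you’d rather let a computer putter, the four rules of inference are easy to code up. Here’s a sketch in Python, with function names of my own choosing, that applies every rule everywhere it fits and repeats for a few rounds starting from the axiom MI:

    ```python
    def successors(s):
        """Every statement reachable from s by one rule of inference."""
        out = set()
        if s.endswith("I"):                     # rule 1: xI gives xIM
            out.add(s + "M")
        if s.startswith("M"):                   # rule 2: Mx gives Mxx
            out.add("M" + s[1:] * 2)
        for i in range(len(s) - 2):             # rule 3: xIIIy gives xUy
            if s[i:i + 3] == "III":
                out.add(s[:i] + "U" + s[i + 3:])
        for i in range(len(s) - 1):             # rule 4: xUUy gives xy
            if s[i:i + 2] == "UU":
                out.add(s[:i] + s[i + 2:])
        return out

    theorems = {"MI"}                           # the lone axiom
    for _ in range(4):                          # four rounds of inference
        theorems |= {t for s in theorems for t in successors(s)}

    print(sorted(t for t in theorems if len(t) <= 6))
    # includes MI, MII, MIM, MIIII, MIIIIM, MIU, MUI, MIUM, MUIM, and more
    ```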

    And that will do. First question: what does any of this even mean? Nobody cares about whether MIUMIUM is a theorem in this system. Nobody cares about figuring out whether MUIUMUIUI might be a theorem. We care about questions like “what’s the smallest odd perfect number?” or “how many equally-strong vortices can be placed in a ring without the system becoming unstable?” With everything reduced to symbol-shuffling like this we’re safe from accidentally assuming something which isn’t justified. But we’re pretty far from understanding what these theorems even mean.

    In this case, these strings don’t mean anything. They’re a toy so we can get comfortable with the idea of building theorems this way. We don’t expect them to do any more work than we expect Lincoln Logs to build usable housing. But you can see how we’re starting pretty far from most interesting mathematics questions.

    Still, if we started from a system that meant something, we would get there in time, right? … Surely? …

    Well, maybe. The thing is, even with this I, M, U scheme and its four rules there are a lot of things to try out. From the first axiom, MI, we can produce either MII or MIM. From MII we can produce MIIM or MIIII. From MIIII we could produce MIIIIM, or MUI, or MIU, or MIIIIIIII. From each of those we can produce … quite a bit of stuff.

    All of those are theorems in this scheme and that’s nice. But it’s a lot. Suppose we have set up symbols and axioms and rules that have clear interpretations that relate to something we care about. If we set the computer to produce every possible legitimate result we are going to produce an enormous number of results that we don’t care about. They’re not wrong, they’re just off-point. And there’s a lot more true things that are off-point than there are true things on-point. We need something with judgement to pick out results that have anything to do with what we want to know. And trying out combinations to see if we can produce the pattern we want is hard. Really hard.

    And there’s worse. If we set up a formal language that matches real mathematics, then we need a lot of work to prove anything. Even simple statements can take forever. I seem to remember my logic professor needing 27 steps to work out the uncontroversial theorem “if x = y and y = z, then x = z”. (Granting he may have been taking the long way around for demonstration purposes.) We would have to look in theorems of unspeakably many symbols to find the good stuff.

    Now it’s reasonable to ask what the point of all this is. Why create a scheme that lets us find everything that can be proved, only to have all we’re interested in buried in garbage?

    There are some uses. To make us swear we’ve read Jorge Luis Borges, for one. Another is to study the theory of what we can prove. That is, what are we able to learn by logical deduction? And another is to design systems meant to let us solve particular kinds of problems. That approach makes the subject merge into computer science. Code for a computer is, in a sense, about how to change a string of data into another string of data. What are the legitimate data to start with? What are the rules by which to change the data? And these are the sorts of things grammars, and the study of grammars, are about.

     
    • Jacob Kanev 7:07 am on Tuesday, 15 March, 2016 Permalink | Reply

      A beautiful post, thank you; and very well explained. I remember our professor linking grammars and Turing machines with Church’s thesis, the fact that the brain is a deterministic machine, and Gödel’s theorem, to arrive at some pretty fundamental claims about perception and knowledge in general. Well, I guess every professor tries to sell their own subject as the most substantial of all. Although he was pretty successful with this one.

      Btw, I do have a wordpress blog: https://jacobkanev.wordpress.com/


      • Joseph Nebus 7:23 am on Wednesday, 16 March, 2016 Permalink | Reply

        I’m happy to be of service and glad that you liked the essay as it turned out.

        I’d agree with your professor in linking grammars to Turing machines and fundamental ideas about what knowledge we can have. Grammars are ways of describing what we can know about a system, and if we’re looking seriously into the subject that has to bring us to the decidability problems and the limits of knowledge. I’m less sure about perception, but I don’t know what case your professor made.

        And I’m glad for the blog link; thank you.


    • elkement (Elke Stangl) 7:45 am on Friday, 18 March, 2016 Permalink | Reply

      Great post! I was a die-hard Gödel-Escher-Bach fan :-) That book made it difficult for me to choose between physics, math, or computer science.


  • Joseph Nebus 3:00 pm on Friday, 4 March, 2016 Permalink | Reply
    Tags: , , , , logic, ,   

    A Leap Day 2016 Mathematics A To Z: Conjecture 


    For today’s entry in the Leap Day 2016 Mathematics A To Z I have an actual request from Elke Stangl. I’d had another ‘c’ request, for ‘continued fractions’. I’ve decided to address that by putting ‘Fractions, continued’ on the roster. If you have other requests, for letters not already committed, please let me know. I’ve got some letters I can use yet.

    Conjecture.

    An old joke says a mathematician’s job is to turn coffee into theorems. I prefer tea, which may be why I’m not employed as a mathematician. A theorem is a logical argument that starts from something known to be true. Or we might start from something assumed to be true, if we think the setup interesting and plausible. And it uses laws of logical inference to draw a conclusion that’s also true and, hopefully, interesting. If it isn’t interesting, maybe it’s useful. If it isn’t either, maybe at least the argument is clever.

    How does a mathematician know what theorems to try proving? We could assemble any combination of premises as the setup to a possible theorem. And we could imagine all sorts of possible conclusions. Most of them will be syntactic gibberish, the equivalent of our friends the monkeys banging away on keyboards. Of those that aren’t, most will be untrue, or at least impossible to argue. Of the rest, potential theorems that could be argued, many will be too long or too unfocused to follow. Only a tiny few potential combinations of premises and conclusions could form theorems of any value. How does a mathematician get a good idea where to spend her time?

    She gets it from experience. In learning what theorems, what arguments, have been true in the past she develops a feeling for things that would plausibly be true. In playing with mathematical constructs she notices patterns that seem to be true. As she gains expertise she gets a sense for things that feel right. And she gets a feel for what would be a reasonable set of premises to bundle together. And what kinds of conclusions probably follow from an argument that people can follow.

    This potential theorem, this thing that feels like it should be true, is a conjecture.

    Properly, we don’t know whether a conjecture is true or false. The most we can say is that we don’t have evidence that it’s false. New information might show that we’re wrong and we would have to give up the conjecture. Finding new examples that it’s true might reinforce our idea that it’s true, but that doesn’t prove it’s true.

    For example, we have the Goldbach Conjecture. According to it every even number greater than two can be written as the sum of exactly two prime numbers. The evidence for it is very good: every even number we’ve tried has worked out, up through at least 4,000,000,000,000,000,000. But it isn’t proven. It’s possible that it’s impossible to prove from the standard rules of arithmetic.
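
    If you would like to check a few cases yourself, a brute-force Python sketch does fine. The serious verification projects use far cleverer machinery than this, but for small even numbers it is plenty.

        # Trial-division primality test; fine for small n.
        def is_prime(n):
            if n < 2:
                return False
            i = 2
            while i * i <= n:
                if n % i == 0:
                    return False
                i += 1
            return True

        # Return primes (p, q) with p + q = n, or None if there are none.
        def goldbach_pair(n):
            for p in range(2, n // 2 + 1):
                if is_prime(p) and is_prime(n - p):
                    return (p, n - p)
            return None

        for n in range(4, 10001, 2):
            assert goldbach_pair(n) is not None   # every even number up to 10,000 works out
        print(goldbach_pair(100))                 # (3, 97)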

    That’s a famous conjecture. It’s frustrated mathematicians for centuries. It’s easy to understand and nobody’s found a proof. Famous conjectures, the ones that get names, tend to do that. They looked nice and simple and had hidden depths.

    Most conjectures aren’t so storied. They instead appear as notes at the end of a section in a journal article or a book chapter. Or they’re put on slides meant to refresh the audience’s interest where it’s needed. They are needed at the fifteen-minute mark of a presentation, just after four slides full of dense equations. They are also needed at the 35-minute mark, in the middle of a field of plots with too many symbols and not enough labels. And one’s needed just before the summary of the talk, so that the audience can try to remember what the presentation was about and why they thought they could understand it. If the deadline were not so tight, if the conference were a month or so later, perhaps the mathematician would find a proof for these conjectures.

    Perhaps. As above, some conjectures turn out to be hard. Fermat’s Last Theorem stood for three and a half centuries as a conjecture. Its first proof turned out to be nothing like anything Fermat could have had in mind. Mathematics popularizers lost an easy hook when that was proven. We used to be able to start an essay on Fermat’s Last Theorem by huffing about how it was properly a conjecture but the wrong term stuck to it because English is a perverse language. Now we have to start by saying how it used to be a conjecture instead.

    But few are like that. Most conjectures are ideas that feel like they ought to be true. They appear because a curious mind will look for new ideas that resemble old ones, or will notice patterns that seem to resemble old patterns.

    And sometimes conjectures turn out to be false. Something can look like it ought to be true, or maybe would be true, and yet be false. Often we can prove something isn’t true by finding a counterexample, just as you might expect. But that doesn’t mean it’s easy. Here’s a false conjecture, one that was put forth by Goldbach. All odd numbers are either prime, or can be written as the sum of a prime and twice a square number. (He considered 1 to be a prime number.) It’s not true, but it took over a century to show that. If you want to find a counterexample go ahead and have fun trying.
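
    Or let the computer have the fun, though be warned that this sketch prints the answer. The search bound is my own choice, made with the benefit of hindsight.

        # Sieve of Eratosthenes, except that 1 is deliberately left marked
        # as prime, since Goldbach counted it as one.
        LIMIT = 6000
        sieve = [True] * (LIMIT + 1)
        sieve[0] = False
        for i in range(2, int(LIMIT ** 0.5) + 1):
            if sieve[i]:
                for j in range(i * i, LIMIT + 1, i):
                    sieve[j] = False

        # Is n a prime, or a prime plus twice a square?
        def fits(n):
            k = 0
            while 2 * k * k <= n:
                if sieve[n - 2 * k * k]:
                    return True
                k += 1
            return False

        print(next(n for n in range(3, LIMIT, 2) if not fits(n)))   # 5777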

    Still, if a mathematician turns coffee into theorems, it is through the step of finding conjectures, promising little paths in the forest of what is not yet known.

     
    • elkement (Elke Stangl) 9:38 pm on Friday, 4 March, 2016 Permalink | Reply

      Thanks :-) So you say that experts’ intuition that might look like magic to laymen is actually pattern recognition, correct? (I think I have read about this in pop-sci psychology books) And if an unproven theorem passes the pattern recognition filter it is promoted to conjecture.

      • Joseph Nebus 7:27 am on Wednesday, 9 March, 2016 Permalink | Reply

        I think that there is a large aspect of it that’s pattern recognition, yes. But some of that may be that we look for things that resemble what’s already worked. So, like, if we already have a theorem about how a sequence of real-valued functions converges to a new real-valued function, then it’s natural to think about variants. Can we say something about sequences of complex-valued functions? If the original theorem demanded functions that were continuous and had infinitely many derivatives, can we loosen that to a function that’s continuous and has only finitely many derivatives? Can we lose the requirement that there be derivatives and still say something?

        I realized at one point while taking real analysis in grad school that many of the theorems we were moving into looked a lot like what we already had with one or two variations, and that I could sometimes write out the next theorem almost by rote. There is certainly a kind of pattern recognition at work here, though sometimes it can feel like playing with the variations on a theme.


        • elkement (Elke Stangl) 7:37 am on Wednesday, 9 March, 2016 Permalink | Reply

          Yes, I agree – I meant pattern recognition in exactly this way, in a very broad way … searching for a similar pattern in your own experiences, among things you have encountered and that worked. I was thinking in general terms and comparing to other skills and expertise, like what makes you successful in any kind of tech troubleshooting. It seems that you have an intuitive feeling about what may work, but actually you draw on related scenarios, or aspects of scenarios, that you have already solved.


    • Pen & Shutter 1:09 pm on Saturday, 5 March, 2016 Permalink | Reply

      I understood all that! I definitely deserve a prize … I am no mathematician … And I enjoyed every word! I love your use of English.


    • davekingsbury 3:25 pm on Saturday, 5 March, 2016 Permalink | Reply

      If you’ve nothing for Q, what about Quadratic Equations … though I start twitching whenever I think about them!


      • Joseph Nebus 7:43 am on Wednesday, 9 March, 2016 Permalink | Reply

        I’m sorry to say Q already got claimed, by ‘quaternion’. But P got ‘polynomial’, which should be close enough to quadratic equations that there’s at least some help there.


  • Joseph Nebus 3:00 pm on Monday, 29 February, 2016 Permalink | Reply
    Tags: , axioms, , , logic, reality   

    A Leap Day 2016 Mathematics A To Z: Axiom 


    I had a great deal of fun last summer with an A To Z glossary of mathematics terms. To repeat a trick with some variation, I called for requests a couple weeks back. I think the requests have settled down so let me start. (However, if you’ve got a request for one of the latter alphabet letters, please let me know. There are ten letters not yet committed.) I’m going to call this a Leap Day 2016 Mathematics A To Z to mark when it sets off. This way I’m not committed to wrapping things up before a particular season ends. On, now, to the start and the first request, this one from Elke Stangl:

    Axiom.

    Mathematics is built of arguments. Ideally, these are all grounded in deductive logic. These would be arguments that start from things we know to be true, and use the laws of logical inference to conclude other things that are true. We want valid arguments, ones in which every implication is based on true premises and correct inferences. In practice we accept some looseness about this, because it would just take forever to justify every single little step. But the structure is there. From some things we know to be true, deduce something we hadn’t before proven was true.

    But where do we get things we know to be true? Well, we could ask the philosophy department. The question’s one of their specialties. But we might be scared of them, and they of us. After all, the mathematics department and the philosophy department are usually, but only usually, both put in the College of Arts and Sciences. Sometimes philosophy is put in the College of Humanities instead. Let’s stay where we are.

    We know to be true stuff we’ve already proved to be true. So we can use the results of arguments we’ve already finished. That’s comforting. Whatever work we, or our forerunners, have done was not in vain. But how did we know those results were true? Maybe they were the consequences of earlier stuff we knew to be true. Maybe they came from earlier valid arguments.

    You see the regression problem. We don’t have anything we know to be true except the results of arguments, and the arguments depended on having something true to build from. We need to start somewhere.

    The real world turns out to be a poor starting point, by the way. Oh, it’s got some good sides. Reality is useful in many ways, but it has a lot of problems to be resolved. Most things we could say about the real world are transitory: they were once untrue, became true, and will someday be false again. It’s hard to see how you can build a universal truth on a transitory foundation. And that’s even if we know what’s true in the real world. We have senses that seem to tell us things about the real world. But the philosophy department, if we eavesdrop on them, would remind us of some dreadful implications. The concept of “the real world” is hard to make precise. Even if we suppose we’ve done that, we don’t know that what we could perceive has anything to do with the real world. The folks in the psychology department and the people who study physiology reinforce the direness of the situation. Even if perceptions can tell us something relevant, and even if our senses aren’t deliberately deceived, they’re still bad at perceiving stuff. We need to start somewhere else if we want certainty.

    That somewhere is the axiom. We declare some things to be a kind of basic law. Here are some things we need not prove true; they simply are.

    (Sometimes mathematicians say “postulate” instead of “axiom”. This is because some things sound better called “postulates”. Meanwhile other things sound better called “axioms”. There is no functional difference.)

    Most axioms tend to be straightforward things. We tend to like having uncontroversial foundations for our arguments. It may hardly seem necessary to say “all right angles are congruent”, but how would you prove that? It may seem obvious that, given a collection of sets of things, it’s possible to select exactly one thing from each of those sets. How do you know you can?

    Well, they might follow from some other axioms, by some clever enough argument. This is possible. Mathematicians consider it elegant to have as few axioms as necessary for their work. (They’re not alone, or rare, in that preference.) I think that reflects a cultural desire to say as much as possible with as little work as possible. The more things we have to assume to show a thing is true, the more likely that in a new application one of those assumptions won’t hold. And that would spoil our knowledge of that conclusion. Sometimes we can show the interesting point of one axiom could be derived from some other axiom or axioms. We might replace an axiom with these alternates if that gives us more enlightening arguments.

    Sometimes people seize on this whole axiom business to argue that mathematics (and science, dragged along behind) is a kind of religion. After all, you need to have faith that some things are true. This strikes me as bad theology and poor mathematics. The most obvious difference between an article of faith and an axiom must be that axioms are voluntary. They are things you assume to be true because you expect them to enlighten something you wish to study. If they don’t, you’re free to try other axioms.

    The axiom I mentioned three paragraphs back, about selecting exactly one thing from each of a collection of sets? That’s known as the Axiom of Choice. It’s used in the theory of sets. But you don’t have to assume it’s true. Much of set theory stands independent of it. Many set theorists go about their work committing neither to the idea that it’s true nor to the idea that it’s false.
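
    For the record, here is one common way to write the Axiom of Choice formally, as a LaTeX sketch; the exact phrasing varies from book to book, and this one is my own choice:

        % For any collection S of nonempty sets there is a function f
        % picking an element f(A) out of each set A in the collection.
        \forall \mathcal{S} \, \Bigl( \emptyset \notin \mathcal{S} \Rightarrow
          \exists f \colon \mathcal{S} \to \bigcup \mathcal{S} \;\;
          \forall A \in \mathcal{S} \;\, f(A) \in A \Bigr)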

    What makes a good set of axioms is rather like what makes a good set of rules for a sport. You do want to have a set that’s reasonably clear. You want them to provide for many interesting consequences. You want them to not have any contradictions. (You settle for them having no contradictions anyone’s found or suspects.) You want them to have as few ambiguities as possible. What makes up that set may evolve as the field, or as the sport, evolves. People do things that weren’t originally thought about. People get more experience and more perspective on the way the rules are laid out. People notice they had been assuming something without stating it. We revise and, we hope, improve the foundations with time.

    There’s no guarantee that every set of axioms will produce something interesting. Well, you wouldn’t expect to necessarily get a playable game by throwing together some random collection of rules from several different sports, either. Most mathematicians stick to familiar groups of axioms, for the same reason most athletes stick to sports they didn’t make up. We know from long experience that this set will give us an interesting geometry, or calculus, or topology, or so on.

    There’ll never be a standard universal set of axioms covering all mathematics. There are different sets of axioms that directly contradict each other but that are, to the best of our knowledge, internally self-consistent. The axioms that describe geometry on a flat surface, like a map, are inconsistent with those that describe geometry on a curved surface, like a globe. We need both maps and globes. So we have both flat and curved geometries, and we decide what kind fits the work we want to do.

    And there’ll never be a complete list of axioms for any interesting field, either. One of the unsettling discoveries of 20th Century logic was of incompleteness. Any set of axioms interesting enough to cover the ability to do arithmetic will have statements that would be meaningful, but that can’t be proven true or false. We might add some of these undecidable things to the set of axioms, if they seem useful. But we’ll always have other things not provably true or provably false.

     
    • gaurish 3:30 pm on Monday, 29 February, 2016 Permalink | Reply

      Amazing explanation :)

    • howardat58 5:33 pm on Monday, 29 February, 2016 Permalink | Reply

      It is difficult to believe that none of this geometry stuff existed before Euclid. His contribution was to show that an abstract system, based on some reasonable axioms which matched practical experience, could be constructed, and that from it all the results and conclusions would follow, WITHOUT the use of pictures and hand-waving. Euclid’s definition of a line, “That which has no breadth”, makes it impossible to draw one !!! Nobody attempted to do this for even the natural numbers until Peano and others at the end of the 19th century
      https://en.wikipedia.org/wiki/Peano_axioms
      (worth a read)


      • Joseph Nebus 8:50 pm on Tuesday, 1 March, 2016 Permalink | Reply

        I don’t mean to suggest I think geometry started with Euclid. I’d be surprised if it turned out Euclid were even the first Ancient Greek to have a system which we’d recognize as organized and logically rigorous geometry. But the record of evidence is scattered, and Euclid did it so very well that he must have obliterated his precursors. It’s got to be something like how The Jazz Singer obliterates memory of the synchronized-sound movies made before then.

        The problem with definitions does point out something true about axioms. The obvious stuff, like what we mean by a line, is often extremely hard to explain. Perhaps it’s because the desire to explain terms using only simpler terms leaves us without the vocabulary or even the concepts to do the work. Perhaps it’s that the most familiar things carry with them so many connotations and unstated assumptions we don’t know how to separate them out again.

        Peano axioms are a great read, yes. I’m a bit sad my undergraduate training in mathematics never gave me reason to study them directly; we were preparing for other things.


    • elkement (Elke Stangl) 7:19 am on Tuesday, 1 March, 2016 Permalink | Reply

      Thanks for the mention, but Axiom Fame should go to Christopher Adamson. He suggested Axiom and I suggested Conjecture in the Requests comment thread :-)


  • Joseph Nebus 3:00 pm on Monday, 11 January, 2016 Permalink | Reply
    Tags: , , logic,   

    Reading the Comics, January 8, 2016: Rerun-Heavy Edition 


    I couldn’t think of what connective theme there might be to the mathematically-themed comic strips of the last couple days. It finally struck me: there’s a lot of reruns in this. That’ll do. Most of them are reruns from before I started writing about comics so much in these parts.

    Bill Watterson’s Calvin and Hobbes for the 5th of January (a rerun, of course, from the 7th of January, 1986) is a kid-resisting-the-test joke. The particular form is trying to claim a religious exemption from mathematics tests. I sometimes see attempts to claim that mathematics is a kind of religion since, after all, you have to believe it’s true. I’ll grant that you do have to assume some things without proof. Those are the rules of logical inference, and the axioms of the field, particularly. But I can’t make myself buy a definition of “religion” that’s just “something you believe”.

    But there are religious overtones to a lot of mathematics. The field promises knowable universal truths, things that are true regardless of who and in what context might know them. And the study of mathematical infinity seems to inspire thoughts of God. Amir D Aczel’s The Mystery Of The Aleph: Mathematics, The Kabbala, and the Search for Infinity is a good read on the topic. Addition is still not a kind of religion, though.

    'My second boyfriend has a brain as big as a large seedless watermelon.' 'Robert, what is the square root of 2,647,129?' '1627 and how do you get ink stains out of your shirt pocket?'

    Bud Grace’s The Piranha Club for the 6th of January, 2016.

    Bud Grace’s The Piranha Club for the 6th of January uses the ability to do arithmetic as proof of intelligence. It’s a kind of intelligence, sure. There’s fun to be had in working out a square root in your head, or on paper. But there’s really no need for it now that we’ve got calculator technology, except for what it teaches you about how to compute.
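
    For the record, Robert’s arithmetic checks out, as the calculator technology will happily confirm:

        # A quick check of the strip's figures. math.isqrt needs Python 3.8+.
        import math
        print(math.isqrt(2_647_129))   # 1627
        print(1627 * 1627)             # 2647129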

    Ruben Bolling’s Super-Fun-Pak Comix for the 6th of January is an installment of A Voice From Another Dimension. It’s just what the title suggests, and of course it would have to be a three-panel comic. The idea that creatures could live in more, or fewer, dimensions of space is a captivating one. It’s challenging to figure how it could work, though. Spaces of one or two dimensions don’t seem like they would allow biochemistry to work. And, as I understand it, chemistry itself seems unlikely to work right in four or more dimensions of space too. But it’s still fun to think about.

    David L Hoyt and Jeff Knurek’s Jumble for the 7th of January is a counting-number joke. It does encourage asking whether numbers are created or discovered, which is a tough question. Counting numbers like “four” are so familiar and so apparently universal that they don’t seem to be constructs. (Even if they are, animals have an understanding of at least small counting numbers like these.) But if “four” is somehow not a human construct, then what about “4,000,000,000,000,000,000,000,000,000,000,000”, a number so large it’s hard to think of something we have that many of that we can visualize. And even if that is, “one fourth” seems a bit different from that, and “four i” — the number which, squared, gives us negative 16 — seems qualitatively different. But if they’re constructs, then why do they correspond well to things we can see in the real world?

    LIHYL (O O - - -), RUCYR (O - - O -), AMDTEN (O - - O O -), GAULEE (- O - O - O). The number that equals four plus four didn't exist until it was `(- - -) (- - - - -) (- -)'. There are dashes between the parentheses in that last answer because there's some wordplay there.

    David L Hoyt and Jeff Knurek’s Jumble for the 7th of January, 2016. The link will likely expire around mid-February.

    Greg Curfman’s Meg Classics for the 7th of January originally ran the 19th of September, 1997. It’s about a kid distractingly interested in multiplication. You get these sometimes. My natural instinct is to put the bigger number first and the smaller number second in a multiplication. “2 times 27” makes me feel nervous in a way “27 times 2” never will.

    Hector D Cantu and Carlos Castellanos’s Baldo for the 8th of January is a rerun from 2011. It’s an old arithmetic joke. I wouldn’t be surprised if George Burns and Gracie Allen did it. (Well, a little surprised. Gracie Allen didn’t tend to play quite that kind of dumb. But everybody tells some jokes that are a little out of character.)

     