Tagged: set theory

  • Joseph Nebus 6:00 pm on Wednesday, 13 September, 2017 Permalink | Reply
    Tags: set theory

    The Summer 2017 Mathematics A To Z: Topology 


    Today’s glossary entry comes from Elke Stangl, author of the Elkemental Force blog. I’ll do my best, although it would have made my essay a bit easier if I’d had the chance to do another topic first. We’ll get there.

    Topology.

    Start with a universe. Nice thing to have around. Call it ‘M’. I’ll get to why that name.

    I’ve talked a fair bit about weird mathematical objects that need some bundle of traits to be interesting. So this will change the pace some. Here, I request only that the universe have a concept of “sets”. OK, that carries a little baggage along with it. We have to have intersections and unions. Those come about from having pairs of sets. The intersection of two sets is all the things that are in both sets simultaneously. The union of two sets is all the things that are in one set, or the other, or both simultaneously. But it’s hard to think of something that could have sets that couldn’t have intersections and unions.

    So from your universe ‘M’ create a new collection of things. Call it ‘T’. I’ll get to why that name. But if you’ve formed a guess about why, then you know. So I suppose I don’t need to say why, now. ‘T’ is a collection of subsets of ‘M’. Now let’s suppose these four things are true.

    First. ‘M’ is one of the sets in ‘T’.

    Second. The empty set ∅ (which has nothing at all in it) is one of the sets in ‘T’.

    Third. Whenever two sets are in ‘T’, their intersection is also in ‘T’.

    Fourth. Whenever two (or more) sets are in ‘T’, their union is also in ‘T’.

    Got all that? I imagine a lot of shrugging and head-nodding out there. So let’s take that. Your universe ‘M’ and your collection of sets ‘T’ are a topology. And that’s that.
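    If you like seeing such rules checked mechanically, here is a quick sketch in Python (the function name and representation are my own invention, not anything standard): it tests the four conditions above on a finite universe.

```python
from itertools import combinations

def is_topology(universe, collection):
    """Check the four conditions for a topology on a finite universe.

    Sets are stored as frozensets so they can themselves be members of a set.
    For a finite collection, closure under pairwise unions implies closure
    under all unions, so checking pairs is enough.
    """
    sets = {frozenset(s) for s in collection}
    if frozenset(universe) not in sets:   # first: M must be in T
        return False
    if frozenset() not in sets:           # second: the empty set must be in T
        return False
    for a, b in combinations(sets, 2):
        if a & b not in sets:             # third: closed under intersections
            return False
        if a | b not in sets:             # fourth: closed under unions
            return False
    return True
```

You can feed it the two-symbol examples coming up next and watch it agree.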

    Yeah, that’s never that. Let me put in some more text. Suppose we have a universe that consists of two symbols, say, ‘a’ and ‘b’. There are four distinct topologies you can make of that. Take the universe plus the collection of sets ∅, {a}, {b}, and {a, b}. That’s a topology. Try it out. That’s the first collection you would probably think of.

    Here’s another collection. Take this two-thing universe and the collection of sets ∅, {a}, and {a, b}. That’s another topology and you might want to double-check that. Or there’s this one: the universe and the collection of sets ∅, {b}, and {a, b}. Last one: the universe and the collection of sets ∅ and {a, b} and nothing else. That one barely looks legitimate, but it is. Not a topology: the universe and the collection of sets ∅, {a}, and {b}.

    The number of topologies grows surprisingly with the number of things in the universe. Like, if we had three symbols, ‘a’, ‘b’, and ‘c’, there would be 29 possible topologies. The universe of the three symbols and the collection of sets ∅, {a}, {b, c}, and {a, b, c}, for example, would be a topology. But the universe and the collection of sets ∅, {a}, {b}, {c}, and {a, b, c} would not. It’s a good thing to ponder if you need something to occupy your mind while awake in bed.

    With four symbols, there’s 355 possibilities. Good luck working those all out before you fall asleep. Five symbols have 6,942 possibilities. You realize this doesn’t look like any expected sequence. After ‘4’ the count of topologies isn’t anything obvious like “two to the number of symbols” or “the number of symbols factorial” or something.
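    If you trust a computer to do the pondering, the small counts can be confirmed by brute force. A sketch (function name mine): try every candidate collection over a finite universe and keep the ones that satisfy the closure rules. The empty set and the universe are forced members, so we only choose among the remaining subsets.

```python
from itertools import chain, combinations

def count_topologies(n):
    """Brute-force count of the topologies on an n-element universe.

    Feasible only for tiny n: the number of candidate collections
    grows as 2^(2^n - 2).
    """
    universe = frozenset(range(n))
    # Every subset other than the empty set and the universe itself;
    # those two belong to every topology, so they aren't optional.
    proper = [frozenset(s) for s in chain.from_iterable(
        combinations(range(n), k) for k in range(1, n))]
    count = 0
    for k in range(len(proper) + 1):
        for chosen in combinations(proper, k):
            sets = set(chosen) | {frozenset(), universe}
            if all(a & b in sets and a | b in sets
                   for a in sets for b in sets):
                count += 1
    return count

print(count_topologies(2))  # 4
print(count_topologies(3))  # 29
```

Four symbols gives 355 the same way, though the 16,384 candidate collections take noticeably longer to grind through.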

    Are you getting ready to call me on being inconsistent? In the past I’ve talked about topology as studying what we can know about geometry without involving the idea of distance. How’s that got anything to do with this fiddling about with sets and intersections and stuff?

    So now we come to that name ‘M’, and what it’s finally mnemonic for. I have to touch on something Elke Stangl hoped I’d write about, but it’s a letter someone else bid on first. That would be a manifold. I come from an applied-mathematics background so I’m not sure I ever got a proper introduction to manifolds. They appeared one day in the background of some talk about physics problems. I think they were introduced as “it’s a space that works like normal space”, and that was it. We were supposed to pretend we had always known about them. (I’m translating. What we were actually told would be that it “works like R3”. That’s how mathematicians say “like normal space”.) That was all we needed.

    Properly, a manifold is … eh. It’s something that works kind of like normal space. That is, it’s a set, something that can be a universe. And it has to be something we can define “open sets” on. The open sets for the manifold follow the rules I gave for a topology above. You can make a collection of these open sets. And the empty set has to be in that collection. So does the whole universe. The intersection of two open sets in that collection is itself in that collection. The union of open sets in that collection is in that collection. If all that’s true, then we have a topological space. A manifold asks for one thing more: around every point there has to be an open set that looks like a little patch of normal space. That local resemblance is what makes the set a manifold and not just any old topological space.

    And now the piece that makes every pop mathematics article about topology talk about doughnuts and coffee cups. It’s possible that two topological spaces might be homeomorphic to each other. “Homeomorphic” is a term of art. But you understand it if you remember that “morph” means shape, and suspect that “homeo” is probably close to “homogeneous”. Two things being homeomorphic means you can match their parts up. In the matching there’s nothing left over in the first thing or the second. And the relations between the parts of the first thing are the same as the relations between the parts of the second thing.

    So. Imagine the snippet of the number line for the numbers larger than -π and smaller than π. Think of all the open sets you can use to cover that. It will have a set like “the numbers bigger than 0 and less than 1”. A set like “the numbers bigger than -π and smaller than 2.1”. A set like “the numbers bigger than 0.01 and smaller than 0.011”. And so on.

    Now imagine the points that exist on a circle, if you’ve omitted one point. Let’s say it’s the unit circle, centered on the origin, and that what we’re leaving out is the point that’s exactly to the left of the origin. The open sets for this are the arcs that cover some part of this punctured circle. There’s the arc that corresponds to the angles from 0 to 1 radian measure. There’s the arc that corresponds to the angles from -π to 2.1 radians. There’s the arc that corresponds to the angles from 0.01 to 0.011 radians. You see where this is going. You see why I say we can match those sets on the number line to the arcs of this punctured circle. There’s some details to fill in here. But you probably believe me this could be done if I had to.
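    The matching itself is easy enough to write down; here is a sketch with names of my own choosing. Send the number t to the point at angle t on the circle, and recover it again from the point.

```python
import math

def to_circle(t):
    """Map a number in (-pi, pi) to a point on the unit circle.

    The excluded angle pi would land on (-1, 0), which is exactly
    the punctured point, so nothing collides with the puncture.
    """
    return (math.cos(t), math.sin(t))

def from_circle(p):
    """Inverse map: recover the angle in (-pi, pi) from a circle point."""
    x, y = p
    return math.atan2(y, x)
```

Open intervals of numbers go to open arcs and come back again with nothing left over on either side. That round trip, with open sets matched to open sets, is the homeomorphism.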

    There’s two (or three) great branches of topology. One is called “algebraic topology”. It’s the one that makes for fun pop mathematics articles about imaginary rubber sheets. It’s called “algebraic” because this field makes it natural to study the holes in a sheet. And those holes tend to form groups and rings, basic pieces of Not That Algebra. The field (I’m told) can be interpreted as looking at functors from topological spaces to groups and rings. This makes for some neat tying-together of subjects this A To Z round.

    The other branch is called “differential topology”, which is a great field to study because it sounds like what Mister Spock is thinking about. It inspires awestruck looks where saying you study, like, Bayesian probability gets blank stares. Differential topology is about differentiable functions on manifolds. This gets deep into mathematical physics.

    As you study mathematical physics, you stop worrying about ever solving specific physics problems. Specific problems are petty stuff. What you like is solving whole classes of problems. A steady trick for this is to try to find some properties that are true about the problem regardless of what exactly it’s doing at the time. This amounts to finding a manifold that relates to the problem. Consider a central-force problem, for example, with planets orbiting a sun. A planet can’t move just anywhere. It can only be in places and moving in directions that give the system the same total energy that it had to start. And the same linear momentum. And the same angular momentum. We can match these constraints to manifolds. Whatever the planet does, it does it without ever leaving these manifolds. To know the shapes of these manifolds — how they are connected — and what kinds of functions are defined on them tells us something of how the planets move.
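    A sketch of that idea in Python, with invented names: follow a single planet around a sun and watch the conserved quantities that pin it to its manifolds. (I use a velocity-Verlet step here because it respects these conservation laws well; the tolerances are loose since this is numerical work, not a proof.)

```python
import math

def leapfrog_orbit(pos, vel, dt, steps, gm=1.0):
    """Advance a planet under an inverse-square central force.

    Velocity-Verlet (leapfrog) integration; gm is the gravitational
    parameter.  Returns the list of (position, velocity) pairs.
    """
    def accel(p):
        r = math.hypot(p[0], p[1])
        return (-gm * p[0] / r**3, -gm * p[1] / r**3)

    traj = [(pos, vel)]
    a = accel(pos)
    for _ in range(steps):
        vhalf = (vel[0] + 0.5 * dt * a[0], vel[1] + 0.5 * dt * a[1])
        pos = (pos[0] + dt * vhalf[0], pos[1] + dt * vhalf[1])
        a = accel(pos)
        vel = (vhalf[0] + 0.5 * dt * a[0], vhalf[1] + 0.5 * dt * a[1])
        traj.append((pos, vel))
    return traj

def energy(p, v, gm=1.0):
    """Total energy: kinetic plus gravitational potential."""
    return 0.5 * (v[0]**2 + v[1]**2) - gm / math.hypot(p[0], p[1])

def ang_momentum(p, v):
    """Angular momentum about the origin (the z-component)."""
    return p[0] * v[1] - p[1] * v[0]
```

However the planet wanders, energy and angular momentum stay (very nearly) fixed; the trajectory never leaves the surfaces those constraints carve out.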

    The maybe-third branch is “low-dimensional topology”. This is what differential topology is for two- or three- or four-dimensional spaces. You know, shapes we can imagine with ease in the real world. Maybe imagine with some effort, for four dimensions. This kind of branches out of differential topology because having so few dimensions to work in makes a lot of problems harder. We need specialized theoretical tools that only work for these cases. Is that enough to count as a separate branch? It depends what topologists you want to pick a fight with. (I don’t want a fight with any of them. I’m over here in numerical mathematics when I’m not merely blogging. I’m happy to provide space for anyone wishing to defend her branch of topology.)

    But each grows out of this quite general, quite abstract idea, also known as “point-set topology”, that’s all about sets and collections of sets. There is much that we can learn from thinking about how to collect the things that are possible.

    • gaurish 5:31 pm on Thursday, 14 September, 2017 Permalink | Reply

      I am really happy that you didn’t start with “Topology is also known as rubber sheet geometry”.


      • Joseph Nebus 1:46 am on Friday, 15 September, 2017 Permalink | Reply

        Although I never know precisely what I’m going to write before I put in the first paragraph, I did resolve that I was going to put off rubber sheets, as well as coffee cups, as long as I possibly could.


    • elkement (Elke Stangl) 7:33 am on Tuesday, 19 September, 2017 Permalink | Reply

      Great post! I was interested in your take as there are different ways to introduce manifolds in theoretical physics – I worked through different General Relativity textbooks / courses in parallel: One lecturer insisted that you need to treat that stuff “with the rigor of a mathematician”, and he went to great lengths to point out why a manifold is different from “normal space”. Others use the typical physicist’s approach of avoiding all specialized terms like fiber bundles and pushbacks, calling everything a “vector field” and “space”, only alluding to comprehensible familiar structures that sort of work in the same way – and somehow still managed to get across the messages and theorems in the end. But the rigorous lecturer said that it was exactly confusing the actual space (or spacetime) and a manifold that had stalled and confused Einstein for many years – so I suppose one should really learn the mathematics thoroughly here …
      On the other hand from what you say it seems to me that manifolds have sort of emerged as a tool in physics, and so Einstein had to create or inspire new mathematics as he went along … while today we can build on this and after we learned the rigorous stuff it is probably OK to fall back into the typical physicist’s mode. (Landau / Lifshitz are my favorite resource in the latter class – they treat GR very concisely in the volume on the Classical Theory of Fields, part of their 10-volume Course of Theoretical Physics – and they use hardly any specialized term related to topologies).


      • Joseph Nebus 8:10 pm on Friday, 22 September, 2017 Permalink | Reply

        Thank you so. Well, I’ve shared just how I got introduced to manifolds myself. I come from a more mathematical physics background and it’s a little surprising how often things would be introduced casually, trusting that the precise details would be filled in later. Sometimes they even were. I don’t think that’s idiosyncratic to my school, although it was a heavily applied-mathematics department. (The joke was that we had two tracks, Applied Mathematics and More Applied Mathematics.)

        I’m not very well-studied in the history of modern physics, at least not in how the mathematical models develop. But I think that you have a good read on it, that we started to get manifolds because they solved some very specific niche problems well. And then treated rigorously they promised more, and then people started looking for problems they could solve. I think that’s probably more common a history for mathematical structures than people realize. But, as you point out, that doesn’t mean everyone’s going to see the tool as worth learning how to use.


  • Joseph Nebus 6:00 pm on Saturday, 2 September, 2017 Permalink | Reply
    Tags: set theory

    The Summer 2017 Mathematics A To Z: Open Set 


    Today’s glossary entry is another request from Elke Stangl, author of the Elkemental Force blog. I’m hoping this also turns out to be a well-received entry. Half of that is up to you, the kind reader. At least I hope you’re a reader. It’s already gone wrong, as it was supposed to be Friday’s entry. I discovered I hadn’t actually scheduled it while I was too far from my laptop to do anything about that mistake. This spoils the nice Monday-Wednesday-Friday routine of these glossary entries that dates back to the first one I ever posted and just means I have to quit forever and not show my face ever again. Sorry, Ulam Spiral. Someone else will have to think of you.

    Open Set.

    Mathematics likes to present itself as being universal truths. And it is. At least if we allow that the rules of logic by which mathematics works are universal. Suppose them to be true and the rest follows. But we start out with intuition, with things we observe in the real world. We’re happy when we can remove the stuff that’s clearly based on idiosyncratic experience. We find something that’s got to be universal.

    Sets are pretty abstract things, as mathematicians use the term. They get to be hard to talk about; we run out of simpler words that we can use. A set is … a bunch of things. The things are … stuff that could be in a set, or else that we’d rule out of a set. We can end up better understanding things by drawing a picture. We draw the universe, which is a rectangular block, sometimes with dashed lines as the edges. The set is some blotch drawn on the inside of it. Some shade it in to emphasize which stuff we want in the set. If we need to pick out a couple things in the universe we drop in dots or numerals. If we’re rigorous about the drawing we could create a Venn Diagram.

    When we do this, we’re giving up on the pure mathematical abstraction of the set. We’re replacing it with a territory on a map. Several territories, if we have several sets. The territories can overlap or be completely separate. We’re subtly letting our sense of geography, our sense of the spaces in which we move, infiltrate our understanding of sets. That’s all right. It can give us useful ideas. Later on, we’ll try to separate out the ideas that are too bound to geography.

    A set is open if whenever you’re in it, you can’t be on its boundary. We never quite have this in the real world, with territories. The border between, say, New Jersey and New York becomes this infinitesimally slender thing, as wide in space as midnight is in time. But we can, with some effort, imagine the state. Imagine being as tiny in every direction as the border between two states. Then we can imagine the difference between being on the border and being away from it.

    And not being on the border matters. If we are not on the border we can imagine the problem of getting to the border. Pick any direction; we can move some distance while staying inside the set. It might be a lot of distance, it might be a tiny bit. But we stay inside however we might move. If we are on the border, then there’s some direction in which any movement, however small, drops us out of the set. That’s a difference in kind between a set that’s open and a set that isn’t.
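    For a set on the number line this difference is concrete enough to compute. A sketch, with a name of my own: the margin of movement available at a point of [0, 1].

```python
def interior_margin(x, lo=0.0, hi=1.0):
    """How far x can move in either direction and still stay in [lo, hi].

    Strictly inside the interval, the margin is positive: a little open
    interval around x fits inside the set.  On the boundary the margin
    is zero: some move, however small, leaves the set at once.
    """
    if x < lo or x > hi:
        raise ValueError("the point isn't in the set at all")
    return min(x - lo, hi - x)
```

The open interval (0, 1) is exactly the points of [0, 1] where this margin comes out positive.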

    I say “a set that’s open and a set that isn’t”. There are such things as closed sets. A set doesn’t have to be either open or closed. It can be neither, a set that includes some of its borders but not other parts of it. It can even be both open and closed simultaneously. The whole universe, for example, is both an open and a closed set. The empty set, with nothing in it, is both open and closed. (This looks like a semantic trick. OK, if you’re in the empty set you’re not on its boundary. But you can’t be in the empty set. So what’s going on? … The usual. It makes other work easier if we call the empty set ‘open’. And the extra work we’d have to do to rule out the empty set doesn’t seem to get us anything interesting. So we accept what might be a trick.) The definitions of ‘open’ and ‘closed’ don’t exclude one another.

    I’m not sure how this confusing state of affairs developed. My hunch is that the words ‘open’ and ‘closed’ evolved independently of each other. Why do I think this? An open set has its openness from, well, not containing its boundaries; from the inside there’s always a little more to it. A closed set has its closedness from sequences. That is, you can consider a string of points inside a set. Are these points leading somewhere? Is that point inside your set? If a string of points always leads to somewhere, and that somewhere is inside the set, then you have closure. You have a closed set. I’m not sure that the terms were derived with that much thought. But it does explain, at least in terms a mathematician might respect, why a set that isn’t open isn’t necessarily closed.
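    That string-of-points idea is concrete enough to sketch, with names of my own choosing: a sequence heading toward 0, which the closed interval [0, 1] keeps and the open interval (0, 1) loses.

```python
def in_closed_unit_interval(x):
    """The closed interval [0, 1]: keeps its boundary points."""
    return 0.0 <= x <= 1.0

def in_open_unit_interval(x):
    """The open interval (0, 1): excludes its boundary points."""
    return 0.0 < x < 1.0

# A string of points inside both sets, leading somewhere:
sequence = [1.0 / n for n in range(2, 1000)]
limit = 0.0  # where the string of points is heading
```

Every point of the sequence sits inside both intervals. The place it leads to sits inside the closed interval but not the open one: the closed set has closure, the open set does not.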

    Back to open sets. What does it mean to not be on the boundary of the set? How do we know if we’re on it? We can define sets by all sorts of complicated rules: complex-valued numbers of size less than five, say. Rational numbers whose denominator (in lowest form) is no more than ten. Points in space from which a satellite dropped would crash into the moon rather than into the Earth or Sun. If we have an idea of distance we could measure how far it is from a point to the nearest part of the boundary. Do we need distance, though?

    No, it turns out. We can get the idea of open sets without using distance. Introduce a neighborhood of a point. A neighborhood of a point is an open set that contains that point. It doesn’t have to be small, but that’s the connotation. And we get to thinking of little N-balls, circle or sphere-like constructs centered on the target point. It doesn’t have to be N-balls. But we think of them so much that we might as well say it’s necessary. If every point in a set has a neighborhood around it that’s also inside the set, then the set’s open.

    You’re going to accuse me of begging the question. Fair enough. I was using open sets to define open sets. This use is all right for an intuitive idea of what makes a set open, but it’s not rigorous. We can give in and say we have to have distance. Then we have N-balls and we can build open sets out of balls that don’t contain the edges. Or we can try to drive distance out of our idea of open sets.

    We can do it this way. Start off by saying the whole universe is an open set. Also that the union of any number of open sets is also an open set. And that the intersection of any finite number of open sets is also an open set. (The empty set comes along for free: it’s the union of no sets at all.) Does this sound weak? It does sound weak. It’s enough. We get the open sets we were thinking of all along from this.
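    On a finite universe you can even let a computer grind out those rules, as in this sketch (names are mine): start from a few seed sets plus the universe and the empty set, then keep taking unions and intersections until nothing new appears.

```python
def generate_topology(universe, seeds):
    """Close a starting collection under unions and intersections.

    For a finite universe, 'any number' of unions and 'finitely many'
    intersections both reduce to repeated pairwise operations, so a
    simple fixed-point loop suffices.
    """
    universe = frozenset(universe)
    sets = {frozenset(), universe} | {frozenset(s) for s in seeds}
    changed = True
    while changed:
        changed = False
        for a in list(sets):
            for b in list(sets):
                for new in (a & b, a | b):
                    if new not in sets:
                        sets.add(new)
                        changed = True
    return sets
```

Seeding with just {a} on a two-element universe gives back the three-set collection ∅, {a}, {a, b}: one of the small topologies we could have written down by hand.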

    This works for the sets that look like territories on a map. It also works for sets for which we have some idea of distance, however strange it is to our everyday distances. It even works if we don’t have any idea of distance. This lets us talk about topological spaces, and study what geometry looks like if we can’t tell how far apart two points are. We can, for example, at least tell that two points are different. Can we find a neighborhood of one that doesn’t contain the other? Then we know they’re some distance apart, even without knowing what distance is.

    That we reached so abstract an idea of what an open set is without losing the idea’s usefulness suggests we’re doing well. So we are. It also shows why Nicolas Bourbaki, the famous nonexistent mathematician, thought set theory and its related ideas were the core of mathematics. Today category theory is a more popular candidate for the core of mathematics. But set theory is still close to the core, and much of analysis is about what we can know from the fact of sets being open. Open sets let us explain a lot.

     
    • elkement (Elke Stangl) 9:52 am on Sunday, 3 September, 2017 Permalink | Reply

      Thanks – beautifully written and very interesting :-)


    • gaurish 10:17 am on Sunday, 3 September, 2017 Permalink | Reply

      Whenever I study analysis/topology, I can’t stop myself from appreciating this simple yet powerful idea.


      • Joseph Nebus 1:17 am on Friday, 8 September, 2017 Permalink | Reply

        It’s a great concept, and one more powerful than it looks. It’s hard to explain how open-ness creeps in to everything, and why it offers something useful that closed-ness doesn’t.


  • Joseph Nebus 6:00 pm on Saturday, 31 December, 2016 Permalink | Reply
    Tags: 19th Century, Axiom of Choice, continuum hypothesis, set theory, ZFC

    The End 2016 Mathematics A To Z: Zermelo-Fraenkel Axioms 


    gaurish gave me a choice for the Z-term to finish off the End 2016 A To Z. I appreciate it. I’m picking the more abstract thing because I’m not sure that I can explain zero briefly. The foundations of mathematics are a lot easier.

    Zermelo-Fraenkel Axioms

    I remember the look on my father’s face when I asked if he’d tell me what he knew about sets. He misheard what I was asking about. When we had that straightened out my father admitted that he didn’t know anything particular. I thanked him and went off disappointed. In hindsight, I kind of understand why everyone treated me like that in middle school.

    My father’s always quick to dismiss how much mathematics he knows, or could understand. It’s a common habit. But in this case he was probably right. I knew a bit about set theory as a kid because I came to mathematics late in the “New Math” wave. Sets were seen as fundamental to why mathematics worked without being so exotic that kids couldn’t understand them. Perhaps so; both my love and I delighted in what we got of set theory as kids. But if you grew up before that stuff was popular you probably had a vague, intuitive, and imprecise idea of what sets were. Mathematicians had only a vague, intuitive, and imprecise idea of what sets were through to the late 19th century.

    And then came what mathematics majors hear of as the Crisis of Foundations. (Or a similar name, like Foundational Crisis. I suspect there are dialect differences here.) It reflected mathematics taking seriously one of its ideals: that everything in it could be deduced from clearly stated axioms and definitions using logically rigorous arguments. As often happens, taking one’s ideals seriously produces great turmoil and strife.

    Before about 1900 we could get away with saying that a set was a bunch of things which all satisfied some description. That’s how I would describe it to a new acquaintance if I didn’t want to be treated like I was in middle school. The definition is fine if we don’t look at it too hard. “The set of all roots of this polynomial”. “The set of all rectangles with area 2”. “The set of all animals with four-fingered front paws”. “The set of all houses in Central New Jersey that are yellow”. That’s all fine.

    And then if we try to be logically rigorous we get problems. We always did, though. They’re embodied by ancient jokes like the person from Crete who declared that all Cretans always lie; is the statement true? Or the slightly less ancient joke about the barber who shaves only the men who do not shave themselves; does he shave himself? If not jokes these should at least be puzzles faced in fairy-tale quests. Logicians dressed this up some. Bertrand Russell gave us the quite respectable “The set consisting of all sets which are not members of themselves”, and asked us to stare hard into that set. To this we have only one logical response, which is to shout, “Look at that big, distracting thing!” and run away. This satisfies the problem only for a while.
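    The barber version is small enough to check exhaustively, in a sketch with invented names: try both possible answers to “does the barber shave himself?” and watch neither one obey the rule.

```python
def barber_is_consistent():
    """Search for a consistent answer to 'does the barber shave himself?'

    The rule: the barber shaves exactly the men who do not shave
    themselves.  Applied to the barber, the rule demands
    shaves(barber, barber) == not shaves(barber, barber).
    """
    for shaves_himself in (True, False):
        rule_says = not shaves_himself  # what the rule dictates for the barber
        if shaves_himself == rule_says:
            return True   # a consistent assignment exists
    return False          # no assignment satisfies the rule
```

Russell’s set works the same way with “is a member of” in place of “shaves”: no answer to the membership question can satisfy the definition.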

    The while ended in — well, that took a while too. But between 1908 and the early 1920s Ernst Zermelo, Abraham Fraenkel, and Thoralf Skolem paused from arguing whose name would also be the best indie rock band name long enough to put set theory right. Their structure is known as Zermelo-Fraenkel Set Theory, or ZF. It gives us a reliable base for set theory that avoids any contradictions or catastrophic pitfalls. Or does so far as we have found in a century of work.

    It’s built on a set of axioms, of course. Most of them are uncontroversial, things like declaring two sets are equivalent if they have the same elements. Declaring that the union of sets is itself a set. Obvious, sure, but it’s the obvious things that we have to make axioms. Maybe you could start an argument about whether we should just assume there exists some infinitely large set. But if we’re aware sets probably have something to teach us about numbers, and that numbers can get infinitely large, then it seems fair to suppose that there must be some infinitely large set. The axioms that aren’t simple obvious things like that are too useful to do without. They assume stuff like that no set is an element of itself. Or that every set has a “power set”, a new set comprised of all the subsets of the original set. Good stuff to know.
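    The power set, at least, is easy to build for finite sets. A sketch, with my own function name:

```python
from itertools import chain, combinations

def power_set(s):
    """All subsets of a finite set, returned as frozensets."""
    items = list(s)
    return {frozenset(c) for c in chain.from_iterable(
        combinations(items, k) for k in range(len(items) + 1))}
```

A set of n things has 2^n subsets, which is why the power set of a set is always strictly bigger than the set itself.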

    There is one axiom that’s controversial. Not controversial the way Euclid’s Parallel Postulate was. That’s the ugly one about lines crossing another line meeting on the same side they make angles smaller than something something or other. That axiom was controversial because it read so weird, so needlessly complicated. (It isn’t; it’s exactly as complicated as it must be. Or better, it’s as simple as it could possibly be and still be useful.) The controversial axiom of Zermelo-Fraenkel Set Theory is known as the Axiom of Choice. It says if we have a collection of mutually disjoint sets, each with at least one thing in them, then it’s possible to pick exactly one item from each of the sets.
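    For finitely many finite sets there’s no controversy at all: we can write the picking rule down, as in this sketch (names mine). The axiom only earns its keep for infinite collections, where no explicit rule like “take the smallest” may be available.

```python
def choose(collection):
    """Pick exactly one element from each set in a collection of
    mutually disjoint, nonempty sets.

    Here 'take the minimum' serves as a concrete picking rule, which
    works because these sets are finite and their elements comparable.
    """
    return {min(s) for s in collection}
```

Disjointness guarantees the picks don’t collide, so the result has one representative per set.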

    It’s impossible to dispute this is what we have axioms for. It’s about something that feels like it should be obvious: we can always pick something from a set. How could this not be true?

    If it is true, though, we get some unsavory conclusions. For example, it becomes possible to take a ball the size of an orange and slice it up. We slice using mathematical blades. They’re not halted by something as petty as the desire not to slice atoms down the middle. We can reassemble the pieces. Into two balls. And worse, it doesn’t require we do something like cut the orange into infinitely many pieces. We expect crazy things to happen when we let infinities get involved. No, though, we can do this cut-and-duplicate thing by cutting the orange into five pieces. When you hear that it’s hard to know whether to point to the big, distracting thing and run away. If we dump the Axiom of Choice we don’t have that problem. But can we do anything useful without the ability to make a choice like that?

    And we’ve learned that we can. If we want to use the Zermelo-Fraenkel Set Theory with the Axiom of Choice we say we’re working in “ZFC”, Zermelo-Fraenkel-with-Choice. We don’t have to. If we don’t want to make any assumption about choices we say we’re working in “ZF”. Which to use depends on what one wants to do.

    Either way Zermelo and Fraenkel and Skolem established set theory on the foundation we use to this day. We’re not required to use them, no; there’s a construction called von Neumann-Bernays-Gödel Set Theory that’s supposed to be more elegant. They didn’t mention it in my logic classes that I remember, though.

    And still there’s important stuff we would like to know which even ZFC can’t answer. The most famous of these is the continuum hypothesis. Everyone knows — excuse me. That’s wrong. Everyone who would be reading a pop mathematics blog knows there are different-sized infinitely-large sets. And knows that the set of integers is smaller than the set of real numbers. The question is: is there a set bigger than the integers yet smaller than the real numbers? The Continuum Hypothesis says there is not.
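    That the real numbers outnumber the integers comes from Cantor’s diagonal argument, which can be sketched in miniature (invented names, and finite lists standing in for infinite ones): given any list of binary sequences, build a new sequence that differs from the i-th one in its i-th place.

```python
def diagonal(sequences):
    """Cantor's diagonal trick in miniature.

    Given a list of equal-length binary sequences, return a sequence
    whose i-th entry is flipped relative to the i-th entry of the
    i-th sequence.  It therefore matches none of the inputs.
    """
    return [1 - seq[i] for i, seq in enumerate(sequences)]
```

However the list was chosen, the diagonal sequence appears nowhere on it. Run in the mind on infinite lists, this shows no enumeration of binary sequences (and hence of real numbers) can be complete.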

    Zermelo-Fraenkel Set Theory, even though it’s all about the properties of sets, can’t tell us if the Continuum Hypothesis is true. But that’s all right; it can’t tell us if it’s false, either. Whether the Continuum Hypothesis is true or false stands independent of the rest of the theory. We can assume whichever state is more useful for our work.

    Back to the ideals of mathematics. One question that produced the Crisis of Foundations was consistency. How do we know our axioms don’t contain a contradiction? It’s hard to say. Typically a set of axioms we can prove consistent are also a set too boring to do anything useful in. Zermelo-Fraenkel Set Theory, with or without the Axiom of Choice, has a lot of interesting results. Do we know the axioms are consistent?

    No, not yet. We know some of the axioms are mutually consistent, at least. And we have some results which, if true, would prove the axioms to be consistent. We don’t know if they’re true. Mathematicians are generally confident that these axioms are consistent. Mostly on the grounds that if there were a problem something would have turned up by now. It’s withstood all the obvious faults. But the universe is vaster than we imagine. We could be wrong.

    It’s hard to live up to our ideals. After a generation of valiant struggling we settle into hoping we’re doing good enough. And waiting for some brilliant mind that can get us a bit closer to what we ought to be.

     
    • elkement (Elke Stangl) 10:42 am on Sunday, 1 January, 2017 Permalink | Reply

      Very interesting – as usual! I was also subjected to the New Math in elementary school – the upside was that you got a lot of nice toys for free, as ‘add-ons’ to school books ( … plastic cubes and other toy blocks that should represent members of sets …). Not sure if it prepared one better to understand Russell’s paradox later ;-)


      • elkement (Elke Stangl) 10:43 am on Sunday, 1 January, 2017 Permalink | Reply

        … and I wish you a Happy New Year and more A-Zs in 2017 :-)


        • Joseph Nebus 5:34 am on Thursday, 5 January, 2017 Permalink | Reply

          Thanks kindly. I am going to do a fresh A-to-Z, although I don’t know just when. Not in January; haven’t got the energy for it right away.


      • Joseph Nebus 5:34 am on Thursday, 5 January, 2017 Permalink | Reply

        Oh, now, the toys were fantastic. I suppose it’s a fair guess whether the people who got something out of the New Math got it because they understood fundamentals better in that form or whether it was just that the toys and games made the subject more engaging.

        I am, I admit, a fan of the New Math, but that may just be because it’s the way I learned mathematics, and the way you did something as a kid is always the one natural way to do it.


  • Joseph Nebus 6:00 pm on Monday, 7 November, 2016 Permalink | Reply
    Tags: set theory

    The End 2016 Mathematics A To Z: Cantor’s Middle Third 


    Today’s term is a request, the first of this series. It comes from HowardAt58, head of the Saving School Math blog. There are many letters not yet claimed; if you have a term you’d like to see me write about please head over to the “Any Requests?” page and pick a letter. Please not one I figure to get to in the next day or two.

    Cantor’s Middle Third.

    I think one could make a defensible history of mathematics by describing it as a series of ridiculous things that get discovered. And then, by thinking about these ridiculous things long enough, mathematicians come to accept them. Even rely on them. Sometime later the public even comes to accept them. I don’t mean to say getting people to accept ridiculous things is the point of mathematics. But there is a pattern which happens.

    Consider. People doing mathematics came to see how a number could be detached from a count or a measure of things. That we can do work on, say, “three” whether it’s three people, three kilograms, or three square meters. We’re so used to this it’s only when we try teaching mathematics to the young that we realize it isn’t obvious.

    Or consider that we can have, rather than a whole number of things, a fraction. Some part of a thing, as if you could have one-half pieces of chalk or two-thirds of a fruit. Counting is relatively obvious; fractions are something novel but important.

    We have “zero”; somehow, the lack of something is still a number, the way two or five or one-half might be. For that matter, “one” is a number. How can something that isn’t numerous be a number? We’re used to it anyway. We can have not just fractions and one and zero but irrational numbers, ones that can’t be represented as a fraction. We have negative numbers, somehow a lack of whatever we were counting so great that we might add some of what we were counting to the pile and still have nothing.

    That takes us up to about eight hundred years ago or something like that. The public’s gotten to accept all this as recently as maybe three hundred years ago. They’ve still got doubts. I don’t blame folks. Complex numbers mathematicians like; the public’s still getting used to the idea, but at least they’ve heard of them.

    Cantor’s Middle Third is part of the current edge. It’s something mathematicians are aware of, and that defies common sense, at least at first. But we’ve come to accept it. The public, well, they don’t know about it. Maybe some do; it turns up in pop mathematics books that like sharing the strangeness of infinities. Few people read them. Sometimes it feels like all those who do go online to tell mathematicians they’re crazy. It comes to us, as you might guess from the name, from Georg Cantor. Cantor established the modern mathematical concept of how to study infinitely large sets in the late 19th century. And he was repeatedly hospitalized for depression. It’s cruel to write all that off as “and he was crazy”. His work’s withstood a hundred and thirty-five years of extremely smart people looking at it skeptically.

    The Middle Third starts out easily enough. Take a line segment. Then chop it into three equal pieces and throw away the middle third. You see where the name comes from. What do you have left? Some of the original line. Two-thirds of the original line length. A big gap in the middle.

    Now take the two line segments. Chop each of them into three equal pieces. Throw away the middle thirds of the two pieces. Now we’re left with four chunks of line and four-ninths of the original length. One big and two little gaps in the middle.

    Now take the four little line segments. Chop each of them into three equal pieces. Throw away the middle thirds of the four pieces. We’re left with eight chunks of line, eight-twenty-sevenths of the original length. Lots of little gaps. Keep doing this, chopping up line segments and throwing away middle pieces. Never stop. Well, pretend you never stop and imagine what’s left.
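
    The chopping described above can be sketched in a few lines of code. This is only an illustrative sketch, and the function name is my own, nothing standard:

```python
from fractions import Fraction

def remove_middle_thirds(segments):
    """Chop each (left, right) segment into three equal pieces
    and keep only the outer two."""
    out = []
    for left, right in segments:
        third = (right - left) / 3
        out.append((left, left + third))
        out.append((right - third, right))
    return out

segments = [(Fraction(0), Fraction(1))]
for step in range(1, 4):
    segments = remove_middle_thirds(segments)
    print(step, len(segments), sum(r - l for l, r in segments))
# After step n there are 2**n segments covering (2/3)**n of the line:
# 1 2 2/3
# 2 4 4/9
# 3 8 8/27
```

    Running it shows the count of segments doubling while the total length shrinks toward zero, which is what lies behind the “no measure” claim.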

    What’s left is deeply weird. What’s left has no length, no measure. That’s easy enough to prove. But we haven’t thrown everything away. There are bits of the original line segment left over. The left endpoint of the original line is left behind. So is the right endpoint of the original line. The endpoints of the line segments after the first time we chopped out a third? Those are left behind. The endpoints of the line segments after chopping out a third the second time, the third time? Those have to be in the set. We have a dust, isolated little spots of the original line, none of them combining together to cover any length. And there are infinitely many of these isolated dots.

    We’ve seen that before. At least we have if we’ve read anything about the Cantor Diagonal Argument. You can find that among the first ten posts of every mathematics blog. (Not this one. I was saving the subject until I had something good to say about it. Then I realized many bloggers have covered it better than I could.) Part of it is pondering how there can be a set of infinitely many things that don’t cover any length. The whole numbers are such a set and it seems reasonable they don’t cover any length. The rational numbers, though, are also an infinitely-large set that doesn’t cover any length. And there’s exactly as many rational numbers as there are whole numbers. This is unsettling but if you’re the sort of person who reads about infinities you come to accept it. Or you get into arguments with mathematicians online and never know you’ve lost.

    Here’s where things get weird. How many bits of dust are there in this middle third set? It seems like it should be countable, the same size as the whole numbers. After all, we pick up some of these points every time we throw away a middle third. So we double the number of points left behind every time we throw away a middle third. That’s countable, right?

    It’s not. We can prove it. The proof looks uncannily like that of the Cantor Diagonal Argument. That’s the one that proves there are more real numbers than there are whole numbers. There are points in this leftover set that were not endpoints of any of these removed middle thirds. This dust has more points in it than there are rational numbers, but it covers no length.
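
    One standard way to describe the surviving dust, not spelled out above, is through base-three expansions: a point survives every stage exactly when it can be written in base three using only the digits 0 and 2. Turning a binary string’s 1s into ternary 2s then always lands on a survivor, which hints at how much more than the endpoints is left. A sketch, with a function name of my own invention:

```python
from fractions import Fraction

def cantor_point(binary_digits):
    """Map a string of binary digits to the survivor whose ternary
    digits are twice the binary ones: '101' -> 0.202 in base three."""
    return sum(Fraction(2 * int(b), 3 ** k)
               for k, b in enumerate(binary_digits, start=1))

print(cantor_point("1"))    # 2/3, a right-hand endpoint
print(cantor_point("101"))  # 20/27
print(cantor_point("01"))   # 2/9, another survivor
```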

    (The dust does, in fact, have the same size as the real numbers. Read each surviving point’s ternary expansion, which uses only the digits 0 and 2, as a binary expansion with the 2s turned into 1s, and the set matches up with the whole unit interval.)

    It’s got other neat properties. It’s a fractal, which is why someone might have heard of it, back in the Great Fractal Land Rush of the 80s and 90s. Look closely at part of this set and it looks like the original set, with bits of dust edging gaps of bigger and smaller sizes. It’s got a fractal dimension, or “Hausdorff dimension” in the lingo, that’s the logarithm of two divided by the logarithm of three. That’s a number actually known to be transcendental, which is reassuring. Nearly all numbers are transcendental, but we only know a few examples of them.

    HowardAt58 asked me about the Middle Third set, and that’s how I’ve referred to it here. It’s more often called the “Cantor set” or “Cantor comb”. The “comb” makes sense because if you draw successive middle-thirds-thrown-away, one after the other, you get something that looks kind of like a hair comb, if you squint.

    You can build sets like this that aren’t based around thirds. You can, for example, develop one by cutting lines into five chunks and throwing away the second and fourth. You get results that are similar, and similarly heady, but different. They’re all astounding. They’re all hard to believe in yet. They may get to be stuff we just accept as part of how mathematics works.

     
  • Joseph Nebus 3:00 pm on Saturday, 16 April, 2016 Permalink | Reply
    Tags: density, set theory

    More Things To Read 


    My long streak of posting something every day will end. There’s just no keeping up mathematics content like this indefinitely, not with my stamina. But it’s not quite over yet. I wanted to share some stuff that people had brought to my attention and that’s just too interesting to pass up.

    The first comes from … I’m not really sure. I lost my note about wherever it did come from. It’s from the Continuous Everywhere But Differentiable Nowhere blog. It’s about teaching the Crossed Chord Theorem. It’s one I had forgotten about, if I heard it in the first place. The result is one of those small, neat things and it’s fun to work through how and why it might be true.

    Next comes from a comment by Gerry on a rather old article, “What’s The Worst Way To Pack?” Gerry located a conversation on MathOverflow.net that’s about finding low-density packings of discs, circles, on the plane. As these sorts of discussions go, it gets into some questions about just what we mean by packings, and whether Wikipedia has typos. This is normal for discovering new mathematics. We have to spend time pinning down just what we mean to talk about. Then we can maybe figure out what we’re saying.

    And the last I picked up from Elke Stangl, of what’s now known as the elkemental Force blog. She had pointed me first to lecture notes from Dr Scott Aaronson which try to explain quantum mechanics from starting principles. Normally, almost invariably, they’re taught in historical sequence. Aaronson here skips all the history to look at what mathematical structures make quantum mechanics make sense. It’s not for casual readers, I’m afraid. It assumes you’re comfortable with things like linear transformations and p-norms. But if you are, then it’s a great overview. I figure to read it over several more times myself.

    Those notes are from a class in Quantum Computing. I haven’t had nearly the time to read them all. But the second lecture in the series is on Set Theory. That’s not quite friendly to a lay audience, but it is friendlier, at least.

     
  • Joseph Nebus 3:00 pm on Friday, 15 April, 2016 Permalink | Reply
    Tags: countability, power sets, set theory

    A Leap Day 2016 Mathematics A To Z: Uncountable 


    I’m drawing closer to the end of the alphabet. While I have got choices for ‘V’ and ‘W’ set, I’ll admit that I’m still looking for something that inspires me in the last couple letters. Such inspiration might come from anywhere. HowardAt58, of that WordPress blog, gave me the notion for today’s entry.

    Uncountable.

    What are we doing when we count things?

    Maybe nothing. We might be counting just to be doing something. Or we might be counting because we want to do nothing. Counting can be a good way into a restful state. Fair enough. Just because we do something doesn’t mean we care about the result.

    Suppose we do care about the result of our counting. Then what is it we do when we count? The mechanism is straightforward enough. We pick out things and say, or imagine saying, “one, two, three, four,” and so on. Or we at least imagine the numbers along with the things being numbered. When we run out of things to count, we take whatever the last number was. That’s how many of the things there were. Why are there eight light bulbs in the chandelier fixture above the dining room table? Because there are not nine.

    That’s how lay people count anyway. Mathematicians would naturally have a more sophisticated view of the business. A much more powerful counting scheme. Concepts in counting that go far beyond what you might work out in first grade.

    Yeah, so that’s what most of us would figure. Things don’t get much more sophisticated than that, though. This probably is because the idea of counting is tied to the theory of sets. And the theory of sets grew, in part, to come up with a logically solid base for arithmetic. So many of the key ideas of set theory are so straightforward they hardly seem to need explaining.

    We build the idea of “countable” off of the nice, familiar numbers 1, 2, 3, and so on. That set’s called the counting numbers. They’re the numbers that everybody seems to recognize as numbers. Not just people. Even animals seem to understand at least the first couple of counting numbers. Sometimes these are called the natural numbers.

    Take a set of things we want to study. We’re interested in whether we can match the things in that set one-to-one with the things in the counting numbers. We don’t have to use all the counting numbers. But we can’t use the same counting number twice. If we’ve matched one chandelier light bulb with the number ‘4’, we mustn’t match a different bulb with the same number. Similarly, if we’ve got the number ‘4’ matched to one bulb, we mustn’t match ‘4’ with another bulb at the same time.

    If we can do this, then our set’s countable. If we really wanted, we could pick the counting numbers in order, starting from 1, and match up all the things with counting numbers. If we run out of things, then we have a finitely large set. The last counting number we used in the matching is the size, or in the jargon, the cardinality of our set. We might not care about the cardinality, just whether the set is finite. Then we can pick counting numbers as we like in no particular order. Just use whatever’s convenient.

    But what if we don’t run out of things? And it’s possible we won’t. Suppose our set is the negative whole numbers: -1, -2, -3, -4, -5, and so on. We can match each of those to a counting number many ways. We always can. But there’s an easy way. Match -1 to 1, match -2 to 2, match -3 to 3, and so on. Why work harder than that? We aren’t going to run out of negative whole numbers. And we aren’t going to find any we can’t match with some counting number. And we aren’t going to have to match two different negative numbers to the same counting number. So what we have here is an infinitely large, yet still countable, set.

    So a set of things can be countable and finite. It can be countable and infinite. What else is there to be?

    There must be something. It’d be peculiar to have a classification that everything was in, after all. At least it would be peculiar except for people studying what it means to exist or to not exist. And most of those people are in the philosophy department, where we’re scared of visiting. So we must mean there’s some such thing as an uncountable set.

    The idea means just what you’d guess if you didn’t know enough mathematics to be tricky. Something is uncountable if it can’t be counted. It can’t be counted if there’s no way to match it up, one thing-to-one thing, with the counting numbers. We have to somehow run out of counting numbers.

    It’s not obvious that we can do that. Some promising approaches don’t work. For example, the set of all the integers — 1, 2, 3, 4, 5, and all that, and 0, and the negative numbers -1, -2, -3, -4, -5, and so on — is still countable. Match the counting number 1 to 0. Match the counting number 2 to 1. Match the counting number 3 to -1. Match 4 to 2. Match 5 to -2. Match 6 to 3. Match 7 to -3. And so on.
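
    That back-and-forth matching fits in one line of code; a sketch, with a made-up function name:

```python
def count_to_int(n):
    """Match counting number n = 1, 2, 3, ... to an integer:
    1 -> 0, 2 -> 1, 3 -> -1, 4 -> 2, 5 -> -2, and so on."""
    return n // 2 if n % 2 == 0 else -(n // 2)

print([count_to_int(n) for n in range(1, 8)])
# [0, 1, -1, 2, -2, 3, -3]
```

    No counting number gets reused, and every integer is eventually reached, which is all the matching asks for.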

    Even ordered pairs of the counting numbers don’t do it. We can match the counting number 1 to the pair (1, 1). Match the counting number 2 to the pair (2, 1). Match the counting number 3 to (1, 2). Match 4 to (3, 1). Match 5 to (2, 2). Match 6 to (1, 3). Match 7 to (4, 1). Match 8 to (3, 2). And so on. We can achieve similar staggering results with ordered triplets, quadruplets, and more. Ordered pairs of integers, positive and negative? Longer to do, yes, but just as doable.
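
    The pairs here are being walked diagonal by diagonal: first all pairs whose coordinates sum to 2, then to 3, and so on. A sketch of that enumeration, with a function name of my own:

```python
from itertools import islice

def diagonal_pairs():
    """Walk the ordered pairs of counting numbers diagonal by diagonal:
    (1,1), then (2,1), (1,2), then (3,1), (2,2), (1,3), ..."""
    s = 2  # current sum of the two coordinates
    while True:
        for b in range(1, s):
            yield (s - b, b)
        s += 1

print(list(islice(diagonal_pairs(), 8)))
# [(1, 1), (2, 1), (1, 2), (3, 1), (2, 2), (1, 3), (4, 1), (3, 2)]
```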

    So are there any uncountable things?

    Sure. Wouldn’t be here if there weren’t. For example: think about the set that’s all the ways to pick things from a set. I sense your confusion. Let me give you an example. Suppose we have the set of three things. They’re the numbers 1, 2, and 3. We can make a bunch of sets out of things from this set. We can make the set that just has ‘1’ in it. We can make the set that just has ‘2’ in it. Or the set that just has ‘3’ in it. We can also make the set that has just ‘1’ and ‘2’ in it. Or the set that just has ‘2’ and ‘3’ in it. Or the set that just has ‘3’ and ‘1’ in it. Or the set that has all of ‘1’, ‘2’, and ‘3’ in it. And we can make the set that hasn’t got any of these in it. (Yes, that does too count as a set.)

    So from a set of three things, we were able to make a collection of eight sets. If we had a set of four things, we’d be able to make a collection of sixteen sets. With five things to start from, we’d be able to make a collection of thirty-two sets. This collection of sets we call the “power set” of our original set, and if there’s one thing we can say about it, it’s that it’s bigger than the set we start from.
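
    Building the power set of a small set, and checking the counts above, takes only a few lines; a sketch using Python’s itertools:

```python
from itertools import chain, combinations

def power_set(items):
    """All subsets of items, from the empty set up to items itself."""
    items = list(items)
    return [set(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

print(len(power_set({1, 2, 3})))     # 8
print(len(power_set({1, 2, 3, 4})))  # 16
print(len(power_set(range(5))))      # 32
```

    A set of n things yields 2**n subsets, which is where the name “power set” comes from.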

    The power set for a finite set, well, that’ll be much bigger. But it’ll still be finite. Still be countable. What about the power set for an infinitely large set?

    And the power set of the counting numbers, the collection of all the ways you can make a set of counting numbers, is really big. Is it uncountably big?

    Let’s step back. Remember when I said mathematicians don’t get “much more” sophisticated than matching up things to the counting numbers? Here’s a little bit of that sophistication. We don’t have to match stuff up to counting numbers if we like. We can match the things in one set to the things in another set. If it’s possible to match them up one-to-one, with nothing missing in either set, then the two sets have to be the same size. The same cardinality, in the jargon.

    So. The set of the numbers 1, 2, 3, has to have a smaller cardinality than its power set. Want to prove it? Do this exactly the way you imagine. You run out of things in the original set before you run out of things in the power set, so there’s no making a one-to-one matchup between the two.

    With the infinitely large yet countable set of the counting numbers … well, the same result holds. It’s harder to prove. You have to show that there’s no possible way to match the infinitely many things in the counting numbers to the infinitely many things in the power set of the counting numbers. (The easiest way to do this is by contradiction. Imagine that you have made such a matchup, pairing everything in your power set to everything in the counting numbers. Then you go through your matchup and put together a collection that isn’t accounted for. Whoops! So you must not have matched everything up in the first place. Why not? Because you can’t.)
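
    The contradiction sketched in the parentheses can be acted out. Whatever rule f you propose for matching counting numbers to sets of counting numbers, the collection of all n not in f(n) disagrees with f(n) about the number n itself, so it was never matched. A sketch, with one made-up attempted matchup:

```python
def diagonal_set(f, limit):
    """All n with n not in f(n), checked for n up to limit."""
    return {n for n in range(1, limit + 1) if n not in f(n)}

f = lambda n: set(range(1, n))  # a made-up matchup: n -> {1, ..., n-1}
d = diagonal_set(f, 10)

# d disagrees with every f(n) at the number n, so no f(n) equals d:
for n in range(1, 11):
    assert (n in d) != (n in f(n))
print(d)  # here every n misses f(n), so d contains all of 1 through 10
```

    Swap in any other rule for f and the same check passes: the diagonal collection always escapes the matchup.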

    But the result holds. The power set of the counting numbers is some other set. It’s infinitely large, yes. And it’s so infinitely large that it’s somehow bigger than the counting numbers. It is uncountable.

    There’s more than one uncountably large set. Of course there are. We even know of some of them. For example, there’s the set of real numbers. Three-quarters of my readers have been sitting anxiously for the past eight paragraphs wondering if I’d ever get to them. There’s good reason for that. Everybody feels like they know what the real numbers are. And the proof that the real numbers are a larger set than the counting numbers is easy to understand. An eight-year-old could master it. You can find that proof well-explained within the first ten posts of pretty much every mathematics blog other than this one. (I was saving the subject. Then I finally decided I couldn’t explain it any better than everyone else has done.)

    Are the real numbers the same size, the same cardinality, as the power set of the counting numbers?

    Sure, they are.

    No, they’re not.

    Whichever you like. This is one of the many surprising mathematical results of the surprising 20th century. Starting from the common set of axioms about set theory, it’s undecidable whether the set of real numbers is as big as the power set of the counting numbers. You can assume that it is. This is known as the Continuum Hypothesis. And you can do fine mathematical work with it. You can assume that it is not. This is known as the … uh … Rejecting the Continuum Hypothesis. And you can do fine mathematical work with that. What’s right depends on what work you want to do. Either is consistent with the starting axioms. You are free to choose either, or if you like, neither.

    My understanding is that most set theory finds it more productive to suppose that they’re not the same size. I don’t know why this is. I know enough set theory to lead you to this point, but not past it.

    But that the question can exist tells you something fascinating. You can take the power set of the power set of the counting numbers. And this gives you another, even vaster, uncountably large set. As enormous as the collection of all the ways to pick things out of the counting numbers is, this power set of the power set is even vaster.

    We’re not done. There’s the power set of the power set of the power set of the counting numbers. And the power set of that. Much as geology teaches us to see Deep Time, and astronomy Deep Space, so power sets teach us to see Deep … something. Deep Infinity, perhaps.

     
  • Joseph Nebus 3:00 pm on Wednesday, 25 November, 2015 Permalink | Reply
    Tags: set theory

    The Set Tour, Part 9: Balls, Only The Insides 


    Last week in the tour of often-used domains I talked about S^n, the surfaces of spheres. These correspond naturally to stuff like the surfaces of planets, or the edges of surfaces. They are also natural fits if you have a quantity that’s made up of a couple of components, and some total amount of the quantity is fixed. More physical systems do that than you might have guessed.

    But this is all the surfaces. The great interior of a planet is by definition left out of S^n. This gives away the heart of what this week’s entry in the set tour is.

    B^n

    B^n is the domain that’s the interior of a sphere. That is, B^3 would be all the points in a three-dimensional space that are less than a particular radius from the origin, from the center of space. If we don’t say what the particular radius is, then we mean “1”. That’s just as with S^n: we mean the radius to be “1” unless someone specifically says otherwise. In practice, I don’t remember anyone ever saying otherwise when I was in grad school. I suppose they might if we were doing a numerical simulation of something like the interior of a planet. You know, something where it could make a difference what the radius is.

    It may have struck you that B^3 is just the points that are inside S^2. Alternatively, it might have struck you that S^2 is the points that are on the edge of B^3. Either way is right. B^n and S^(n-1), for any positive whole number n, are tied together, one the interior and the other the edge.

    B^n we tend to call the “ball” or the “n-ball”. Probably we hope that suggests bouncing balls and baseballs and other objects that are solid throughout. S^n we tend to call the “sphere” or the “n-sphere”, though I admit that doesn’t make a strong case for ruling out the inside of the sphere. Maybe we should think of it as the surface. We don’t even have to change the letter representing it.

    As the “n” suggests, there are balls for as many dimensions of space as you like. B^2 is a circle, filled in. B^1 is just a line segment, stretching out from -1 to 1. B^3 is what’s inside a planet or an orange or an amusement park’s glass light fixture. B^4 is more work than I want to do today.

    So here’s a natural question: does B^n include S^(n-1)? That is, when we talk about a ball in three dimensions, do we mean the surface and everything inside it? Or do we just mean the interior, stopping ever so short of the surface? This is a division very much like dividing the real numbers into negative and positive; do you include zero in either set?

    Typically, I think, mathematicians don’t. If a mathematician speaks of B^3 without saying otherwise, she probably means the interior of a three-dimensional ball. She’s not saying anything one way or the other about the surface. This we name the “open ball”, and if she wants to avoid any ambiguity she will say “the open ball B^n”.

    “Open” here means the same thing it does when speaking of an “open set”. That may not communicate well to people who don’t remember their set theory. It means that the edges aren’t included. (Warning! Not actual set theory! Do not attempt to use that at your thesis defense. That description was only a reference to what’s important about this property in this particular context.)

    If a mathematician wants to talk about the ball and the surface, she might say “the closed ball B^n”. This means to take the surface and the interior together. “Closed”, again, here means what it does in set theory. It pretty much means “include the edges”. (Warning! See above warning.)

    Balls work well as domains for functions that have to describe the interiors of things. They also work if we want to talk about a constraint that’s made up of a couple of components, and that can be up to some size but not larger. For example, suppose you may put up to a certain budget cap into (say) six different projects, but you aren’t required to use the entire budget. We could model your budgeting as finding the point in B^6 that gets the best result. How you measure the best is a problem for your operations research people. All I’m telling you is how we might represent the study of the thing you’re doing.
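
    The open/closed distinction, and the budget picture, both come down to a membership test; a sketch with function names of my own (I use the Euclidean norm to match the ball’s definition, though a real budget cap would more likely bound the plain sum):

```python
import math

def in_open_ball(point, radius=1.0):
    """Strictly inside the ball: boundary excluded."""
    return math.sqrt(sum(x * x for x in point)) < radius

def in_closed_ball(point, radius=1.0):
    """Inside or on the boundary sphere."""
    return math.sqrt(sum(x * x for x in point)) <= radius

surface = (1.0, 0.0, 0.0)       # a point on the boundary sphere
print(in_open_ball(surface))    # False
print(in_closed_ball(surface))  # True

budget = (0.2, 0.1, 0.3, 0.0, 0.4, 0.1)  # six project outlays
print(in_open_ball(budget))     # True: strictly under the cap
```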

     
    • ivasallay 4:38 pm on Wednesday, 25 November, 2015 Permalink | Reply

      I didn’t know any of this before, but it was well written and easy enough to understand. Thanks.

      Liked by 1 person

      • Joseph Nebus 6:23 am on Saturday, 28 November, 2015 Permalink | Reply

        Thank you. I’m most glad to hear it. I’m surprised how many of this sequence I keep finding I should write.

        Liked by 1 person

  • Joseph Nebus 12:00 pm on Sunday, 25 October, 2015 Permalink | Reply
    Tags: set theory

    Reading the Comics, October 22, 2015: Foundations Edition 


    I am, yes, saddened to hear that Apartment 3-G is apparently shuffling off to a farm upstate. There it will be visited by a horrifying kangaroo-deer-fox-demon. And an endless series of shots of two talking heads saying they should go outside, when they’re already outside. But there are still many comic strips running, on Gocomics.com and on Comics Kingdom. They’ll continue to get into mathematically themed subjects. And best of all I can use a Popeye strip to talk about the logical foundations of mathematics and what computers can do for them.

    Jef Mallett’s Frazz for the 18th of October carries on the strange vendetta against “showing your work”. If you do read through the blackboard-of-text you’ll get some fun little jokes. I like the explanation of how “obscure calculus symbols” could be used, “And a Venn diagram!” Physics majors might notice the graph on the center-right, to the right of the DNA strand. That could show many things, but the one most plausible to me is a plot of the velocity and the position of an object undergoing simple harmonic motion.

    Still, I do wonder what work Caulfield would show if the problem were to say what fraction were green apples, if there were 57 green and 912 red apples. There are levels where “well, duh” will not cut it. In case “well, duh” does cut it, then a mathematician might say the answer is “obvious”. But she may want to avoid the word “obvious”, which has a history of being dangerously flexible. She might then say “by inspection”. That means, basically, look at it and yeah, of course that’s right.

    Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney for the 18th of October uses mathematics as the quick way to establish “really smart”. It doesn’t take many symbols this time around, curiously. Superstar Equation E = mc^2 appears in a misquoted form. At first that seems obvious, since if there were an equals sign in the denominator the whole expression would not parse. Then, though, you notice: if E and m and c mean what they usually do in the Superstar Equation, then, “E − mc^2” is equal to zero. It shouldn’t be in the denominator anyway. So, the big guy has to be the egghead.

    Peter Maresca’s Origins of the Sunday Comics for the 18th of October reprints one of Winsor McCay’s Dreams of the Rarebit Fiend strips. As normal for Dreams and so much of McCay’s best work, it’s a dream-to-nightmare strip. And this one gives a wonderful abundance of numerals, and the odd letter, to play with. Mathematical? Maybe not. But it is so merrily playful it’d be a shame not to include.

    Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 20th of October is a joke soundly in set theory. It also feels like it’s playing with a set-theory-paradox problem but I can’t pin down which one exactly. It feels most like the paradox of “find the smallest uninteresting counting number”. But being the smallest uninteresting counting number would be an interesting property to have. So any candidate number has to count as interesting. It also feels like it’s circling around the heap paradox. Take a heap of sand and remove one grain, and you still have a heap of sand. But if you keep doing that, at some point, you have just one piece of sand from the original pile, and that is no heap. When does the heap go away?

    Daniel Shelton’s Ben for the 21st of October is a teaching-arithmetic problem, using jellybeans. And fractions. Well, real objects can do wonders in connecting a mathematical abstraction to something one has an intuition for. One just has to avoid unwanted connotations and punching.

    Doug Savage’s Savage Chickens for the 21st of October uses “mathematics homework” as the emblem of the hardest kind of homework there might ever be. I saw the punch line coming a long while off, but still laughed.

    The Sea Hag is dismissive of scientists, who try to take credit for magic even though 'they can't even THINK! They have to use machines to tell them what two plus two is!! And another machine to prove the first one was right!'

    Bud Sagendorf’s Popeye for the 22nd of October, 2015. The strip originally ran sometime in 1981. This would be only a few years after the Four-Color Theorem was solved by computer. The computer did this by trying out all the possibilities and reporting everything was OK.

    Bud Sagendorf’s Popeye began what it billed as a new story, “Science Vs Sorcery”, on Monday the 19th. I believe it’s properly a continuation of the previous story, though, “Back-Room Pest!” which began the 13th of July. “Back-Room Pest!”, according to my records, originally ran from the 27th of July, 1981, through to the 23rd of January, 1982. So there’s obviously time missing. And this story, like “Back-Room Pest”, features nutty inventor Professor O G Wotasnozzle. I know, I know, you’re all deeply interested in working out correct story guides for this.

    Anyway, the Sea Hag in arguing against scientists claims “they can’t even think! They have to use machines to tell them what two plus two is!! And another machine to prove the first one was right!” It’s a funny line and remarkably pointed for an early-80s Popeye comic. The complaint that computers leave one unable to do even simple reasoning is an old one, of course. The complaint has been brought against every device or technique that promises to lighten a required mental effort. It seems to me similar to the way new kinds of weapons are accused of making war too monstrous and too unchivalrous, too easily done by cowards. I suppose it’s also the way a fable like the story of John Henry holds up human muscle against the indignity of mechanical work.

    The crack about needing another machine to prove the first was right is less usual, though. Sagendorf may have meant to be whimsically funny, but he hit on something true. One of the great projects of late 19th and early 20th century mathematics was the attempt to place its foundations on strict logic, independent of all human intuition. (Intuition can be a great guide, but it can lead one astray.) Out of this came a study of proofs as objects, as mathematical constructs which must themselves follow certain rules.

    And here we reach a spooky borderland between mathematics and sorcery. We can create a proof system that is, in a way, a language with a grammar. A string of symbols that satisfies all the grammatical rules is itself a proof, a valid argument following from the axioms of the system. (The axioms are some basic set of statements which we declare to be true by assumption.) And it does not matter how the symbols are assembled: by mathematician, by undergrad student worker, by monkey at a specialized typewriter, by a computer stringing things together. Once a grammatically valid string of symbols is done, that string of symbols is a theorem, with its proof written out. The proof is the string of symbols that is the theorem written out. If it were not for the modesty of what is claimed to be done — proofs about arithmetic or geometry or the like — one might think we had left behind mathematics and were now summoning demons by declaring their True Names. Or risk the stars overhead going out, one by one.

    So it is possible to create a machine that simply grinds out proofs. Or, since this is the 21st century, a computer that does that. If the computer is given no guidance it may spit out all sorts of theorems that are true but boring. But we can set up a system by which the computer, by itself, works out whether a given theorem does follow from the axioms of mathematics. More, this has been done. It’s a bit of a pain, because any proofs that are complicated enough to really need checking involve an incredible number of steps. But for a challenging enough proof it is worth doing, and automated proof checking is one of the tools mathematicians can now draw on.
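    The mechanical character of this checking can be sketched in a few lines. Here is a toy system of my own invention (the names and the “grammar” are illustrative, nothing like a real proof assistant): a “proof” is a sequence of formula strings, each of which must be an axiom or must follow from two earlier lines by modus ponens.

```python
# Toy proof checker: a proof is a list of formula strings. Each line must
# be an axiom, or follow by modus ponens: from "A" and "A -> B", infer "B".
# (A hypothetical mini-system for illustration; real automated checkers
# work over a full formal language, but the grinding is the same in kind.)

def follows_by_modus_ponens(formula, earlier):
    # B follows when some earlier line is A and another earlier line is "A -> B"
    return any(f"{a} -> {formula}" in earlier for a in earlier)

def check_proof(axioms, lines):
    """Return True when every line is an axiom or a modus ponens step."""
    earlier = []
    for line in lines:
        if line not in axioms and not follows_by_modus_ponens(line, earlier):
            return False
        earlier.append(line)
    return True

axioms = {"p", "p -> q", "q -> r"}
print(check_proof(axioms, ["p", "p -> q", "q", "q -> r", "r"]))  # True
print(check_proof(axioms, ["r"]))  # False: "r" asserted with no justification
```

    Note the checker never asks what the symbols mean; a valid string is a theorem regardless of who, or what, assembled it.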

    Of course, then we have the problem of knowing that the computer is carrying out its automatic-proof programming correctly. I’m not stepping into that kind of trouble.

    The attempt to divorce mathematics from all human intuition was a fruitful one. The most awe-inspiring discovery to come from it is surely that of incompleteness. Any consistent mathematical system rich enough to describe ordinary arithmetic will contain within it statements that are true, but can’t be proven true from the axioms.

    Georgia Dunn’s Breaking Cat News for the 22nd of October features a Venn Diagram. It’s part of how cats attempt to understand toddlers. My understanding is that their work is correct.

     
  • Joseph Nebus 11:16 pm on Tuesday, 3 March, 2015 Permalink | Reply
    Tags: , , , , , set theory, theology, typesetting   

    How To Build Infinite Numbers 


    I had missed it, as mentioned in the above tweet. The link is to a page on the Form And Formalism blog, reprinting a translation of one of Georg Cantor’s papers in which he founded the modern understanding of sets, of infinite sets, and of infinitely large numbers. Although it gets into pretty heady topics, it doesn’t actually require a mathematical background, at least as I look at it; it just requires a willingness to follow long chains of reasoning, which I admit is much harder than algebra.

    Cantor — whom I’d talked a bit about in a recent Reading The Comics post — was deeply concerned and intrigued by infinity. His paper enters into that curious space where mathematics, philosophy, and even theology blend together, since it’s difficult to talk about the infinite without people thinking of God. I admit the philosophical side of the discussion is difficult for me to follow, and the theological side harder yet, but a philosopher or theologian would probably have symmetric complaints.

    The translation is provided as scans of a typewritten document, so you can see what it was like trying to include mathematical symbols in non-typeset text in the days before LaTeX (which is great at it, but requires annoying amounts of setup) or HTML (which is mediocre at it, but requires less setup) or Word (I don’t use Word) were available. Somehow, folks managed to live through times like that, but it wasn’t pretty.

     
    • elkement 11:03 am on Sunday, 8 March, 2015 Permalink | Reply

      I remember that stuff – as one of the most intriguing things I learned in the Linear Algebra class in the first semester.

      Like

      • Joseph Nebus 11:51 pm on Monday, 9 March, 2015 Permalink | Reply

        Linear Algebra? I’m intrigued it was put in that course. In my curriculum they were fit into real analysis and mathematical logic instead.

        Liked by 1 person

        • elkement 10:59 am on Tuesday, 10 March, 2015 Permalink | Reply

          It was somewhere in the same chapter / lecture as different types of sets, infinite sets, and Russell’s paradox of the set of all sets and the related proof of the inherent contradiction …

          Like

          • Joseph Nebus 7:59 pm on Thursday, 12 March, 2015 Permalink | Reply

            Ah, I see. I wouldn’t have thought to connect the topics quite that way, although it’s possible I’m just thinking too heavily of how it happened to be done the semesters I took linear algebra, which were pretty heavily biased towards the sorts of matrix and vector space stuff that would be helpful in physics. Maybe I failed to read the chapters the professor chose to skip.

            (I didn’t have much choice: I lost my textbook after the first exam and couldn’t buy or borrow a second copy. Luckily homeworks were assigned by actually writing out the problems, rather than just ‘Chapter 2.3 3-9 odds, 12, 14’, so I could keep up, but it was tougher than it needed to be. I’m not positive the professor wasn’t kind to me with my final grade, or whether having to pay extremely close attention to definitions and proofs in class was better for me than trusting I could check the details in the textbook later on.)

            Liked by 1 person

            • elkement 8:21 pm on Thursday, 12 March, 2015 Permalink | Reply

              Yes, the lecture was mainly matrices, vector spaces, and tensors. The set of sets and Cantor’s diagonal argument etc. were mentioned in one of the first chapters if I recall correctly. Russell’s proof (or some version of it) required mapping elements of a set onto their power set (or something ;-)) so this was introduced right after surjective and injective linear maps.

              I am too lazy now, but I could check – I still do have this textbook, but it is literally falling apart!
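              (The mapping-onto-the-power-set argument alluded to here can even be checked by brute force on a tiny set — a sketch, with all names my own: for every map f from S into its power set, the “diagonal” set {x : x not in f(x)} is missed by f, so no f is surjective.)

```python
# Brute-force check of the diagonal argument for |S| = 3: no function
# from S to its power set can be surjective, because the diagonal set
# {x in S : x not in f(x)} never appears among the values of f.
from itertools import combinations, product

S = (0, 1, 2)
subsets = [frozenset(c) for r in range(len(S) + 1)
           for c in combinations(S, r)]  # all 2**3 = 8 subsets

for images in product(subsets, repeat=len(S)):  # every possible f
    f = dict(zip(S, images))
    diagonal = frozenset(x for x in S if x not in f[x])
    assert diagonal not in f.values()

print("checked", len(subsets) ** len(S), "maps; none is surjective")
```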

              Like

              • Joseph Nebus 3:17 am on Saturday, 14 March, 2015 Permalink | Reply

                Ah, OK. Now I see where it’d fit naturally in with the way the instructor was leading the course. It wasn’t something I had expected but I do see how that makes sense.

                Liked by 1 person

            • elkement 8:57 pm on Thursday, 12 March, 2015 Permalink | Reply

              … and as you speak about Real Analysis this is maybe the time to ask a perfectly stupid question, but blame it on differences in our educational systems and my ignorance thereof: When I was a student of physics, I had two math classes in the first year, Linear Algebra and Real Analysis (two semesters each, first and second of my “undergrad” studies, though we had no bachelor degrees back then, only masters – this was just the first year of five).
              So I always thought “Calculus” = “Real Analysis”. But it isn’t, right?
              I know this may sound dumb but I have tacitly made this assumption so often, so I admit my blunder publicly now :-)

              Real analysis was mainly theorems and proofs, “building math from scratch”, series and functions, their properties – continuous, differentiable etc.
              Is “Calculus” more about learning rules how to integrate and differentiate, but without all those detailed proofs? I started thinking about it when I read a book by a science writer (an English major) who taught herself calculus later. It seems it had not been mandatory in her high school. Then I’d understand why colleges would have to teach calculus to make sure everybody has the same background. I can remember I had a few colleagues who came from a high school not at all specialized in science. We have e.g. something like “business high schools”, with accounting classes and the like…. but those students were unlikely to pick a science degree at the university so perhaps nobody cared that they had a really hard time with that rigorous, proof-based math right from day 1.

              Like

              • Joseph Nebus 3:24 am on Saturday, 14 March, 2015 Permalink | Reply

                I had to think about this one a bit, but I believe there is a subtle difference between Calculus and Real Analysis. Real Analysis is the study of real-valued functions — how to define them, how to use them, how to manipulate them. But the most interesting stuff to do with real-valued functions that you can teach with the sorts of proofs that new students can follow or reconstruct are generally the things that we get in intro calculus: finding maximums and minimums, finding derivatives, integrating, that sort of thing. So Real Analysis tends to look like “Intro Calculus, only this time you have to do the proofs”.

                Liked by 1 person

  • Joseph Nebus 8:46 pm on Thursday, 20 November, 2014 Permalink | Reply
    Tags: Antikythera Mechanism, , , , , Julian calendar, mechanisms, , set theory, soup   

    Reading the Comics, November 20, 2014: Ancient Events Edition 


    I’ve got enough mathematics comics for another roundup, and this time, the subjects give me reason to dip into ancient days: one to the most famous, among mathematicians and astronomers anyway, of Greek shipwrecks, and another to some point in the midst of winter nearly seven thousand years ago.

    Eric the Circle (November 15) returns “Griffinetsabine” to the writer’s role and gives another “Shape Single’s Bar” scene. I’m amused by Eric appearing with his ex: x is practically the icon denoting “this is an algebraic expression”, while geometry … well, circles are good for denoting that, although I suspect that triangles or maybe parallelograms are the ways to denote “this is a geometric expression”. Maybe it’s the little symbol for a right angle.

    Jim Meddick’s Monty (November 17) presents Monty trying to work out just how many days there are to Christmas. This is a problem fraught with difficulties, starting with the obvious: does “today” count as a shopping day until Christmas? That is, if it were the 24th, would you say there are zero or one shopping days left? Also, is there even a difference between a “shopping day” and a “day” anymore now that nobody shops downtown so it’s only the stores nobody cares about that close on Sundays? Sort all that out and there’s the perpetual problem in working out intervals between dates on the Gregorian calendar, which is that you have to be daft to try working out intervals between dates on the Gregorian calendar. The only worse thing is trying to work out the intervals between Easters on it. My own habit for this kind of problem is to use the United States Navy’s Julian Date conversion page. The Julian date is a straight serial number, counting the number of days that have elapsed since noon Universal Time at what’s called the 1st of January, 4713 BCE, on the proleptic Julian calendar (“proleptic” because nobody around at the time was using, or even imagined, the calendar, but we can project back to what date that would have been), a year picked because it’s the start of several astronomical cycles, and it’s way before any specific recordable dates in human history, so any day you might have to particularly deal with has a positive number. Of course, to do this, we’re transforming the problem of “counting the number of days between two dates” to “counting the number of days between a date and January 1, 4713 BCE, twice”, but the advantage of that is, the United States Navy (and other people) have worked out how to do that and we can use their work.
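    The conversion the Navy’s page performs can be sketched in a few lines, using the standard Fliegel–Van Flandern integer arithmetic (the function name here is mine): turn each Gregorian date into its serial day number, then subtract.

```python
# Julian Day Number via the Fliegel-Van Flandern formula: a straight
# serial count of days since noon UT, 1 January 4713 BCE (proleptic
# Julian calendar). Intervals between dates are then plain subtraction.

def jdn(year, month, day):
    """Julian Day Number for a Gregorian calendar date (at noon UT)."""
    a = (14 - month) // 12
    y = year + 4800 - a
    m = month + 12 * a - 3
    return (day + (153 * m + 2) // 5 + 365 * y
            + y // 4 - y // 100 + y // 400 - 32045)

print(jdn(2000, 1, 1))                        # 2451545
print(jdn(2014, 12, 25) - jdn(2014, 11, 17))  # 38 days until Christmas
```

    As the essay says, this trades one subtraction problem for two conversions; the point is that the conversions are already worked out.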

    Bill Hind’s kids-sports comic Cleats (November 19, rerun) presents Michael offering basketball advice that verges into logic and set theory problems: making the ball not go to a place outside the net is equivalent to making the ball go inside the net (if we decide that the edge of the net counts as either inside or outside the net, at least), and depending on the problem we want to solve, it might be more convenient to think about putting the ball into the net, or not putting the ball outside the net. We see this, in logic, in a set of relations called De Morgan’s Laws (named for Augustus De Morgan, who put these ideas in modern mathematical form), which describe what kinds of descriptions — “something is outside both sets A and B at once” or “something is not inside set A or set B”, or so on — represent the same relationship between the thing and the sets.
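    De Morgan’s Laws are easy to watch in action with Python’s set type — a small sketch (the universe and sets here are made up for illustration):

```python
# De Morgan's Laws with sets: relative to a universe, "outside A or B"
# is the same as "outside A and outside B", and "outside (A and B)"
# is the same as "outside A or outside B".
universe = set(range(10))
A = {1, 2, 3, 4}
B = {3, 4, 5, 6}

not_in_union = universe - (A | B)           # outside both A and B at once
neither = (universe - A) & (universe - B)   # not in A, and not in B
print(not_in_union == neither)  # True

not_in_intersection = universe - (A & B)
outside_one = (universe - A) | (universe - B)
print(not_in_intersection == outside_one)  # True
```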

    Tom Thaves’s Frank and Ernest (November 19) is set in the classic caveman era, with prehistoric Frank and Ernest and someone else discovering mathematics and working out whether a negative number times a negative number might be positive. It’s not obvious right away that it should be, as you realize when you try teaching someone the multiplication rules including negative numbers, and it’s worth pointing out, a negative times a negative equals a positive because that’s the way we, the users of mathematics, have chosen to define negative numbers and multiplication. We could, in principle, have decided that a negative times a negative should give us a negative number. This would be a different “multiplication” (or a different “negative”) than we use, but as long as we had logically self-consistent rules we could do that. We don’t, because it turns out negative-times-negative-is-positive is convenient for problems we like to do. Mathematics may be universal — something following the same rules we do has to get the same results we do — but it’s also something of a construct, and the multiplication of negative numbers is a signal of that.
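    The choice is less free than it might sound, though: once we insist that multiplication distribute over addition, the sign rule is forced. A short derivation, using only the usual arithmetic rules:

```latex
\begin{align*}
0 &= (-1)\cdot 0 && \text{anything times zero is zero} \\
  &= (-1)\cdot\bigl(1 + (-1)\bigr) && \text{since } 1 + (-1) = 0 \\
  &= (-1)\cdot 1 + (-1)\cdot(-1) && \text{distributivity} \\
  &= -1 + (-1)\cdot(-1),
\end{align*}
```

    and adding 1 to both sides gives $(-1)\cdot(-1) = 1$. So a system where negative-times-negative is negative would have to give up distributivity, or zero, or something else we like even more.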

    Goofy sees the message 'buried treasure in back yard' in his alphabet soup; what are the odds of that?

    The Mickey Mouse comic rerun the 20th of November, 2014.

    Mickey Mouse (November 20, rerun) — I don’t know who wrote or drew this, but Walt Disney’s name was plastered onto it — sees messages appearing in alphabet soup. In one sense, such messages are inevitable: jumble and swirl letters around and eventually, surely, any message there are enough letters for will appear. This is very similar to the problem of infinite monkeys at typewriters, although with the special constraint that if, say, the bowl has only two letters “L”, it’s impossible to get the word “parallel”, unless one of the I’s is doing an impersonation. Here, Goofy has the message “buried treasure in back yard” appear in his soup; assuming those are all the letters in his soup then there’s something like 44,881,973,505,008,640,000 different arrangements of letters that could come up. There are several legitimate messages you could make out of that (“treasure buried in back yard”, “in back yard buried treasure”), not to mention shorter messages that don’t use all those letters (“run back”), but I think it’s safe to say the number of possible sentences that make sense are pretty few and it’s remarkable to get something like that. Maybe the cook was trying to tell Goofy something after all.
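    The count comes from the multinomial formula: the number of distinct orderings of a multiset of letters is n! divided by the factorial of each letter’s multiplicity. A sketch recomputing it for these 24 letters (the exact figure depends, of course, on precisely which letters are floating in the bowl):

```python
# Distinct arrangements of a multiset of letters:
# n! / (product of count! for each repeated letter).
from collections import Counter
from math import factorial, prod

def arrangements(letters):
    counts = Counter(letters)
    return factorial(sum(counts.values())) // prod(
        factorial(c) for c in counts.values())

print(arrangements("aab"))  # 3: aab, aba, baa
print(arrangements("buriedtreasureinbackyard"))  # 44881973505008640000
```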

    Mark Anderson’s Andertoons (November 20) is a cute gag about the dangers of having too many axes on your plot.

    Gary Delainey and Gerry Rasmussen’s Betty (November 20) mentions the Antikythera Mechanism, one of the most famous analog computers out there, and that’s close enough to pure mathematics for me to feel comfortable including it here. The machine was found in April 1900, in an ancient shipwreck, and at first seemed to be just a strange lump of bronze and wood. By 1902 the archeologist Valerios Stais noticed a gear in the mechanism, but since it was believed the wreck far, far predated any gear mechanisms, the machine languished in that strange obscurity that a thing which can’t be explained sometimes suffers. The mechanism appears to be designed to be an astronomical computer, tracking the positions of the Sun and the Moon — tracking the actual moon rather than an approximate mean lunar motion — the rising and setting of some constellations, solar eclipses, several astronomical cycles, and even the Olympic Games. It’s an astounding mechanism, and it’s mysterious: who made it? How? Are there others? What happened to them? How was the mechanical engineering needed for this developed, and what other projects did the people who created this also do? Any answers to these questions, if we ever know them, seem sure to be at least as amazing as the questions are.

     
  • Joseph Nebus 9:09 pm on Friday, 29 June, 2012 Permalink | Reply
    Tags: false, falsity, implication, , inference, introduction to logic, , libel, , logic class, logical implication, mathematical logic, null sets, , proposition, set theory, true, ,   

    Why You Failed Your Logic Test 


    An interesting parallel’s struck me between nonexistent things and the dead: you can say anything you want about them. At least in United States law it’s not possible to libel the dead, since they can’t be hurt by any loss of reputation. That parallel doesn’t lead me anywhere obviously interesting, but I’ll take it anyway. At least it lets me start this discussion without too closely recapitulating the previous essay. The important thing is that at least in a logic class, if I say, “all the coins in this purse are my property”, as Lewis Carroll suggested, I’m asserting something I say is true without claiming that there are any coins in there. Further, I could also just as easily have said “all the coins in this purse are not my property” and made as true a statement, as long as there aren’t any coins there.

    (More …)

     
    • Chip Uni 9:32 pm on Friday, 29 June, 2012 Permalink | Reply

      Aside from the standard logic, there are three ‘alternative’ definitions of logical implication possible:

                 A=T,B=T   A=T,B=F   A=F,B=T   A=F,B=F   definition
      normal        T         F         T         T      -A | B
      (1)           T         F         T         F      B
      (2)           T         F         F         T      A == B
      (3)           T         F         F         F      A & B

      What happens to logic if we use any of these alternate definitions?
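      (One way to start exploring is to tabulate all four candidates mechanically — a sketch, with the labels taken from the comment above:)

```python
# Enumerate the four candidate "implication" definitions over all
# truth assignments (A, B), in the order TT, TF, FT, FF.
from itertools import product

definitions = {
    "normal (-A | B)": lambda a, b: (not a) or b,
    "(1) B":           lambda a, b: b,
    "(2) A == B":      lambda a, b: a == b,
    "(3) A & B":       lambda a, b: bool(a and b),
}

rows = {}
for name, implies in definitions.items():
    rows[name] = [implies(a, b) for a, b in product([True, False], repeat=2)]
    print(name, ["T" if v else "F" for v in rows[name]])
```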

      Like

      • Joseph Nebus 9:55 pm on Thursday, 5 July, 2012 Permalink | Reply

        I haven’t the chance to work it out this week since awfully high priority things are competing with the blog but I’ll try thinking it out when I can.

        Like
