How Many Of This Weird Prime Are There?

A friend made me aware of a neat little unsolved problem in number theory. I know it seems like number theory is nothing but unsolved problems, but this is an unfair reputation. There are as many as four solved problems in number theory. It’s a tough field.

The question started with the observation that 11 is a prime number. And so is 101. But 1,001 is not; nor is 10,001. How many prime numbers are there that have the form 10^n + 1 , for whole-number values of n? Are there infinitely many? Finitely many? If there’s finitely many, how many are there?

It turns out this is an open question. We know of three prime numbers that you can write as 10^n + 1 . I’ll leave the third for you to find.

One neat bit is that if there are more 10^n + 1 prime numbers, they have to be ones where n is itself a whole power of 2. That is, where the number is 10^{2^k} + 1 for some whole number k. They’ve been tested up to 10^{16,777,216} + 1 at least, so this subset of the Generalized Fermat Numbers seems to be rare. But wouldn’t it be just our luck if from 10^{16,777,217} + 1 onward they were nothing but primes?
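
If you want to poke at the small cases yourself, here is a quick Python sketch. It uses plain trial division, nothing like the specialized primality tests used on the record-sized candidates, so it only reaches exponents tiny by that standard. The last line checks the algebraic reason the exponent must be a power of 2.

```python
# A toy check of which 10**n + 1 are prime for small n.
# Trial division is fine at this scale; the real searches use
# specialized tests for generalized Fermat numbers.
def is_prime(m):
    """Trial division; adequate for numbers up to around 10**13."""
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

prime_exponents = [n for n in range(13) if is_prime(10**n + 1)]
print(prime_exponents)  # run this and you spoil the post's little puzzle

# Why n must be a power of 2: if n = m*d with d odd and d > 1, then
# 10**m + 1 divides 10**n + 1 (the sum-of-odd-powers factorization).
print((10**3 + 1) % (10**1 + 1))  # 1001 is a multiple of 11, so composite
```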

My All 2020 Mathematics A to Z: Michael Atiyah

To start this year’s glossary project Mr Wu, author of the Singapore Math Tuition blog, had a great suggestion: The Atiyah-Singer Index Theorem. It’s an important and spectacular piece of work. I’ll explain in a few sentences why I’m not doing that.

Mr Wu pointed out that a biography of Michael Atiyah, one of the authors of this theorem, might be worth doing. GoldenOj endorsed the biography idea, and the more I thought it over the more I liked it. I’m not able to do a true biography, something that goes to primary sources and finds a convincing story of a life. But I can sketch out a bit, exploring his work and why it’s of note.

Color cartoon illustration of a coati in a beret and neckerchief, holding up a director's megaphone and looking over the Hollywood hills. The megaphone has the symbols +, ×, ÷ (the division obelus), and = on it. The Hollywood sign is, instead, the letters MATHEMATICS. In the background are spotlights, with several of them crossing so as to make the letters A and Z; one leg of the spotlights has 'TO' in it, so the art reads out, subtly, 'Mathematics A to Z'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Michael Atiyah.

Theodore Frankel’s The Geometry of Physics: An Introduction is a wonderful book. It’s 686 pages, including the index. It all explores how our modern understanding of physics is our modern understanding of geometry. On page 465 it offers this:

The Atiyah-Singer index theorem must be considered a high point of geometrical analysis of the twentieth century, but is far too complicated to be considered in this book.

I know when I’m licked. Let me attempt to look at one of the people behind this theorem instead.

The Riemann Hypothesis is about where to find the roots of a particular infinite series. It’s been out there, waiting for a solution, for a century and a half. There are many interesting results which we would know to be true if the Riemann Hypothesis is true. In 2018, Michael Atiyah declared that he had a proof. And, more, an amazing proof, a short proof. Albeit one that depended on a great deal of background work and careful definitions. The mathematical community was skeptical. It still is. But it did not dismiss outright the idea that he had a solution. It was plausible that Atiyah might solve one of the greatest problems of mathematics in something that fits on a few PowerPoint slides.

So think of a person who commands such respect.

His proof of the Riemann Hypothesis, as best I understand, is not generally accepted. For example, it includes the fine structure constant. This comes from physics. It describes how strongly electrons and photons interact. The most compelling (to us) consequence of the Riemann Hypothesis is in how prime numbers are distributed among the integers. It’s hard to think how photons and prime numbers could relate. But, then, if humans had done all of mathematics without noticing geometry, we would know there is something interesting about π. Differential equations, if nothing else, would turn up this number. We happened to discover π in the real world first too. If it were not familiar for so long, would we think there should be any commonality between differential equations and circles?

I do not mean to say Atiyah is right and his critics wrong. I’m no judge of the matter at all. What is interesting is that one could imagine a link between a pure number-theory matter like the Riemann hypothesis and a physical matter like the fine structure constant. It’s not surprising that mathematicians should be interested in physics, or vice-versa. Atiyah’s work was particularly important. Much of his work, from the late 70s through the 80s, was in gauge theory. This subject lies under much of modern quantum mechanics. It’s born of the recognition of symmetries, group operations that you can do on a field, such as the electromagnetic field.

In a sequence of papers Atiyah, with other authors, sorted out particular cases of how magnetic monopoles and instantons behave. Magnetic monopoles may sound familiar, even though no one has ever seen one. These are magnetic points, an isolated north or a south pole without its opposite partner. We can understand well how they would act without worrying about whether they exist. Instantons are more esoteric; I don’t remember encountering the term before starting my reading for this essay. I believe I did, encountering the technique as a way to describe the transitions between one quantum state and another. Perhaps the name failed to stick. I can see where there are few examples you could give an undergraduate physics major. And it turns out that monopoles appear as solutions to some problems involving instantons.

This was, for Atiyah, later work. It arose, in part, from bringing the tools of index theory to nonlinear partial differential equations. This index theory is the thing that got us the Atiyah-Singer Index Theorem too complicated to explain in 686 pages. Index theory, here, studies questions like “what can we know about a differential equation without solving it?” Solving a differential equation would tell us almost everything we’d like to know, yes. But it’s also quite hard. Index theory can tell us useful things like: is there a solution? Is there more than one? How many? And it does this through topological invariants. A topological invariant is a trait like, for example, the number of holes that go through a solid object. These things are indifferent to operations like moving the object, or rotating it, or reflecting it. In the language of group theory, they are invariant under a symmetry.

It’s startling to think a question like “is there a solution to this differential equation” has connections to what we know about shapes. This shows some of the power of recasting problems as geometry questions. From the late 50s through the mid-70s, Atiyah was a key person working in a topic that is about shapes. We know it as K-theory. The “K” from the German Klasse, here. It’s about groups, in the abstract-algebra sense; the things in the groups are themselves classes of isomorphisms. Michael Atiyah and Friedrich Hirzebruch defined this sort of group for a topological space in 1959. And this gave definition to topological K-theory. This is again abstract stuff. Frankel’s book doesn’t even mention it. It explores what we can know about shapes from the tangents to the shapes.

And it leads into cobordism, also called bordism. This is about what you can know about shapes which could be represented as cross-sections of a higher-dimension shape. The iconic, and delightfully named, shape here is the pair of pants. In three dimensions this shape is a simple cartoon of what it’s named. On the one end, it’s a circle. On the other end, it’s two circles. In between, it’s a continuous surface. Imagine the cross-sections, how on separate layers the two circles are closer together. How their shapes distort from a real circle. In one cross-section they come together. They appear as two circles joined at a point. In another, they’re a two-looped figure. In another, a smoother circle. Knowing that Atiyah came from these questions may make his future work seem more motivated.

But how does one come to think of the mathematics of imaginary pants? Many ways. Atiyah’s path came from his first research specialty, which was algebraic geometry. This was his work through much of the 1950s. Algebraic geometry is about the kinds of geometric problems you get from studying algebra problems. Algebra here means the abstract stuff, although it does touch on the algebra from high school. You might, for example, do work on the roots of a polynomial, or a comfortable enough equation like x^2 + y^2 = 1 . Atiyah had started — as an undergraduate — working on projective geometries. This is what one curve looks like projected onto a different surface. This moved into elliptic curves and into particular kinds of transformations on surfaces. And algebraic geometry has proved important in number theory. You might remember that the Wiles-Taylor proof of Fermat’s Last Theorem depended on elliptic curves. Some work on the Riemann hypothesis is built on algebraic topology.

(I would like to trace things farther back. But the public record of Atiyah’s work doesn’t offer hints. I can find amusing notes like his father asserting he knew he’d be a mathematician. He was quite good at changing local currency into foreign currency, making a profit on the deal.)

It’s possible to imagine this clear line in Atiyah’s career, and why his last works might have been on the Riemann hypothesis. That’s too pat an assertion. The more interesting thing is that Atiyah had several recognizable phases and did iconic work in each of them. There is a cliche that mathematicians do their best work before they are 40 years old. And, it happens, Atiyah did earn a Fields Medal, given to mathematicians for the work done before they are 40 years old. But I believe this cliche represents a misreading of biographies. I suspect that first-rate work is done when a well-prepared mind looks fresh at a new problem. A mathematician is likely to have these traits line up early in the career. Grad school demands the deep focus on a particular problem. Getting out of grad school lets one bring this deep knowledge to fresh questions.

It is easy, in a career, to keep studying problems one has already had great success in, for good reason and with good results. It tends not to keep producing revolutionary results. Atiyah was able — by chance or design I can’t tell — to several times venture into a new field. The new field was one that his earlier work prepared him for, yes. But it posed new questions about novel topics. And this creative, well-trained mind focusing on new questions produced great work. And this is one way to be credible when one announces a proof of the Riemann hypothesis.

Here is something I could not find a clear way to fit into this essay. Atiyah recorded some comments about his life for the Web of Stories site. These are biographical and do not get into his mathematics at all. Much of it is about his life as child of British and Lebanese parents and how that affected his schooling. One that stood out to me was about his peers at Manchester Grammar School, several of whom he rated as better students than he was. Being a good student is not tightly related to being a successful academic. Particularly as so much of a career depends on chance, on opportunities happening to be open when one is ready to take them. It would be remarkable if there were three people of greater talent than Atiyah who happened to be in the same school at the same time. It’s not unthinkable, though, and we may wonder what we can do to give people the chance to do what they are good in. (I admit this assumes that one finds doing what one is good in particularly satisfying or fulfilling.) In looking at any remarkable talent it’s fair to ask how much of their exceptional nature is that they had a chance to excel.

Rjlipton’s thoughts on the possible ABC Conjecture proof

There is this thing called the abc Conjecture. It’s a big question in number theory, which is the part of mathematics where we learn we don’t understand anything about prime numbers. Nearly a decade ago Shinichi Mochizuki announced a proof. It’s been controversial. Most importantly, it’s not been well-understood.

It’s finally getting published in a proper journal. A lot of mathematics work is passed around as PDFs, usually on arXiv these days. It’s good for sharing fresh thoughts. But journal publication usually means that the paper has been reviewed, critically, and approved by people who could tell whether the reasoning is sound. Mochizuki’s paper is somewhere around 500 to 600 pages (I’ve seen different figures), and by every report hard to understand even for number theory proofs. A proof is, more than mathematicians like to admit, really an argument that convinces other mathematicians that, if we wanted to spend the time, we could find a completely rigorous proof. With very long proofs, and very complicated proofs, the standard of being convincing gets tougher.

In this essay, Not As Easy As ABC, rjlipton discusses some of the conjecture, and the problems of Mochizuki’s paper. Not specifically about whether this proof is right, but about the general problem of how we can trust difficult proofs. So you may find that worth the read.

The Playful Math Education Blog Carnival #136

Greetings, friends, and thank you for visiting the 136th installment of Denise Gaskins’s Playful Math Education Blog Carnival. I apologize ahead of time that this will not be the merriest of carnivals. It has not been the merriest of months, even with Pi Day at its center.

Playful Math Education Blog Carnival banner, showing a coati dressed in bright maroon ringmaster's jacket and top hat, with multiplication and division signs sitting behind atop animal-training podiums; a greyscale photograph audience is in the far background.
Banner art again by Thomas K Dye, creator of Newshounds, Infinity Refugees, Something Happens, and his current comic strip, Projection Edge. You can follow him on Patreon and read his comic strip nine months ahead of its worldwide publication. The banner art was commissioned several weeks ago when I expected I would be in a more playful mood this week.

In consideration of that, let me lead with Art in the Time of Transformation by Paula Beardell Krieg. This is from the blog Playful Bookbinding and Paper Works. The post particularly reflects on the importance of creating a thing in a time of trouble. There is great beauty to find, and make, in symmetries, and rotations, and translations. Simple polygons patterned by simple rules can be accessible to anyone. Studying just how these symmetries and other traits work leads to important mathematics. Thus Krieg’s page has recent posts with names like “Frieze Symmetry Group F7” alongside posts on how symmetry is for five-year-olds. I am grateful to Goldenoj for the reference.

Krieg’s writing drew the attention of another kind contributor to my harvesting. Symmetry and Multiplying Negative Numbers explores one of those confusing things about negative numbers: how can a negative number times a negative number be positive? One way to understand this is to represent arithmetic operations as geometric operations. Particularly, we can see negation as a reflection.

That link was brought to my attention by Iva Sallay, another longtime friend of my little writings here. She writes fun pieces about every counting number, along with recreational puzzles. She asked to share 1458 Tangrams Can Be A Pot of Gold, as an example of what fascinating things can be found in any number. This includes a tangram. Tangrams appear in recreational-mathematics puzzles based on the ways you can recombine shapes. It’s always exciting to be able to shift between arithmetic and shapes. And that leads to a video and related thread again pointed to me by goldenoj …

This video, by Mathologer on YouTube, explains a bit of number theory. Number theory is the field of asking easy questions about whole numbers, and then learning that the answers are almost impossible to find. I exaggerate, but it does often involve questions that just suppose you understand what a prime number should be. And then, as the title asks, take centuries to prove.

Fermat’s Two-Squares Theorem, discussed here, is not the famous one about a^n + b^n = c^n . Pierre de Fermat had a lot of theorems, some of which he proved. This one is about prime numbers, though, and particularly prime numbers that are one more than a multiple of four. This means it’s sometimes called Fermat’s 4k+1 Theorem, which is the name I remember learning it under. (k is so often a shorthand for “some counting number” that people don’t bother specifying it, the way we don’t bother to say “x is an unknown number”.) The normal proofs of this we do in the courses that convince people they’re actually not mathematics majors.

What the video offers is a wonderful alternate approach. It turns key parts of the proof into geometry, into visual statements. Into sliding tiles around and noticing patterns. It’s also a great demonstration of one standard problem-solving tool. This is to look at a related, different problem that’s easier to say things about. This leads to what seems like a long path from the original question. But it’s worth it because the path involves thinking out things like “is the count of this thing odd or even”? And that’s mathematics that you can do as soon as you can understand the question.
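
As a check you can run at home, this little Python sketch brute-forces two-square decompositions for a few 4k+1 primes. It is nothing from the video, just an illustration of what the theorem claims; notice each prime turns up exactly one decomposition.

```python
# Brute-force confirmation, for a handful of small primes p with
# p % 4 == 1, that p is a sum of two squares. Not a proof, of course.
from math import isqrt

def two_square_decompositions(p):
    """All ways to write p as a*a + b*b with 0 < a <= b."""
    return [(a, isqrt(p - a * a))
            for a in range(1, isqrt(p) + 1)
            if a * a <= p - a * a and isqrt(p - a * a) ** 2 == p - a * a]

def is_prime(m):
    return m > 1 and all(m % d for d in range(2, isqrt(m) + 1))

for p in (5, 13, 17, 29, 97, 101):
    assert is_prime(p) and p % 4 == 1
    print(p, two_square_decompositions(p))
# 5 = 1+4, 13 = 4+9, 17 = 1+16, 29 = 4+25, 97 = 16+81, 101 = 1+100
```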

Iva Sallay also brought up Jenna Laib’s Making Meaning with Arrays: More Preschooler Division which similarly sees numerical truths revealed through geometric reasoning. Here, particularly, by the problem of baking muffins and thinking through how to divide them up. A key piece here, for a particular child’s learning, was being able to pick up and move things around. Often in shifting between arithmetic and geometry we suppose that we can rearrange things without effort. As adults it’s easy to forget that this is an abstraction that we need to learn.

Sharing of food, in this case cookies, appears in Helena Osana’s Mathematical thinking begins in the early years with dialogue and real-world exploration. Mathematics, Osana notes, is primarily about thinking. An important part of mathematics education is working out how the thinking children most like to do can also find mathematics.

I again thank Iva Sallay for that link, as well as this essay. Dan Meyer’s But Artichokes Aren’t Pinecones: What Do You Do With Wrong Answers? looks at the problem of students giving wrong answers. There is no avoiding giving wrong answers. A parent’s or teacher’s response to wrong answers will vary, though, and Meyer asks why that is. Meyer has some hypotheses. His example notes that he doesn’t mind a child misidentifying an artichoke as a pinecone. Not in the same way that identifying the sum of 1 and 9 as 30 would bother him. What is different about those mistakes?

Jessannwa’s Soft Start In The Intermediate Classroom looks to the teaching of older students. No muffins and cookies here. That the students might be more advanced doesn’t change the need to think of what they have energy for, and interest in. She discusses a class setup that’s meant to provide structure in ways that don’t feel so authority-driven. And ways to turn practicing mathematics problems into optimizing game play. I will admit this is a translation of the problem which would have worked well for me. But I also know that not everybody sees a game as, in part, something to play at maximum efficiency. It depends on the game, though. They’re on Twitter as @jesannwa.

Speaking of the game, David Coffey’s Creating Positive Change in Math Class was written in anticipation of the standardized tests meant to prove out mathematics education. Coffey gets to thinking about how to frame teaching to focus more on why students should have a skill, and how they can develop it. How to get students to feel involved in their work. Even how to get students to do homework more reliably. Coffey’s scheduled to present at the Michigan Council of Teachers of Mathematics conference in Grand Rapids this July, if all goes well. And this is another post I know of thanks to Goldenoj.

These are thoughts about how anyone can start learning mathematics. What does it look like to have learned a great deal, though, to the point of becoming renowned for it? Life Through A Mathematician’s Eyes posted Australian Mathematicians in late January. It’s a dozen biographical sketches of Australian mathematicians. It also matches each to charities or other public-works organizations. They were trying to help the continent through the troubles it had even before the pandemic struck. They’re in no less need for all that we’re exhausted. The page’s author is on Twitter as @lthmath.

Mathematical study starts small, though. Often it starts with games. There are many good ones, not least Iva Sallay’s Find the Factors puzzles.

Besides that, Dads Worksheets has provided a set of Math Word Search Puzzles. It’s a new series from people who create worksheets for many grade levels and many aspects of mathematics. They’re on Twitter as @dadsworksheets.

Mr Wu, of the Singapore Math Tuition blog, has also begun a new series of recreational mathematics puzzles. He lays out the plans for this, puzzles aimed at children around eight to ten years old. One of the early ones is the Stickers Math Question. A more recent one is The Secret of the Sweets (Sweet Distribution Problem). Mr Wu can be found on Twitter as @mathtuition88.

Denise Gaskins, on Twitter as @letsplaymath, and indefatigable coordinator for this carnival, offers the chance to Play Math with Your Kids for Free. This is an e-book sampler of mathematics gameplay.

I have since the start of this post avoided mentioning the big mathematical holiday of March. Pi Day had the bad luck to fall on a weekend this year, and then was further hit by the Covid-19 pandemic forcing the shutdown of many schools. Iva Sallay again helped me by noting YummyMath’s activities page It’s Time To Gear Up For Pi Day. This hosts several worksheets, about the history of π and ways to calculate it, and several formulas for π. This even gets into interesting techniques like how to use continued fractions in finding a numerical value.
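
As a taste of the continued-fraction technique, here is a Python sketch using the standard library's fractions module. The opening terms 3, 7, 15, 1, 292 of π's continued fraction are standard; the convergents they produce include the familiar schoolroom 22/7 and the remarkably good 355/113.

```python
# Convergents of a continued fraction [a0; a1, a2, ...], built with
# exact rational arithmetic. Applied here to the first terms of pi.
from fractions import Fraction

def convergents(terms):
    """Successive fractions from a continued-fraction expansion."""
    result = []
    for i in range(1, len(terms) + 1):
        value = Fraction(terms[i - 1])
        for t in reversed(terms[:i - 1]):
            value = t + 1 / value
        result.append(value)
    return result

for frac in convergents([3, 7, 15, 1, 292]):
    print(frac, float(frac))
# 3, then 22/7, 333/106, 355/113, 103993/33102: ever-better approximations
```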

The Guys and Good Health blog presented Happy Pi Day on the 14th, with — in a move meant to endear the blog to me — several comic strips. This includes one from Grant Snider, who draws lovely strips. I’m sad that his Incidental Comics has left, so I can’t feature it often during my Reading the Comics roundups anymore.

Virtual Brush Box, meanwhile, offers To Celebrate Pi Day, 10 Examples of Numbers and 10 Examples of Math Involved with Horses which delights me by looking at π, and mathematics, as they’re useful in horse-related activities. This may be the only blog post written specifically for me and my sister, and I am so happy that there is the one.

There’s a bit more, a bit of delight, that was my greatest surprise in looking for posts for this month. That is poetry. I mean this literally.

Whimsy-Mimsy wrote on Pi Day a haiku.

D Avery, on Shift N Shake, wrote the longer Another Slice of Pi Day, the third year of their composing poems observing the day.

Rolands Rag Bag shared A Pi-Ku for Pi-Day featuring a poem written in a form I wasn’t aware anyone did. The “Pi-Ku” as named here has 3 syllables for the first line, 1 syllable in the second line, 4 syllables in the third line, 1 syllable the next line, 5 syllables after that … you see the pattern: the syllable counts follow the digits of π. (One of Avery’s older poems also keeps this form.) The form could, I suppose, go on to as many lines as one likes. Or at least to the 33rd line, when we would need a line of zero syllables. Probably one would make up a rule to cover that.

Blind On The Light Side similarly wrote Pi poems, including a Pi-Ku, for March 12, 2020. These poems don’t reach long enough to deal with the zero-syllable line, but we can forgive someone not wanting to go on that long.

As a last note, I have joined Mathstodon, the mathematically-themed Mastodon microblogging instance. You can follow my shy writings there, or follow a modest number of people talking, largely, about mathematics. On WordPress, I do figure to keep reading the comics for their mathematics topics. And sometime this year, when I feel I have the energy, I hope to do another A to Z, my little glossary project.

And this is what I have to offer. I hope the carnival has brought you some things of interest, and some things of delight. And, if I may, please consider this Grant Snider cartoon, Hope.

Life Through A Mathematician’s Eyes is scheduled to host the 137th installment of the Playful Math Education Blog Carnival, at the end of April. I look forward to seeing it. Good luck to us all.

A Weird Kind Of Ruler

I ran across something neat. It’s something I’ve seen before, but the new element is that I have a name for it. This is the Golomb Ruler. It’s a ruler made with as few marks as possible. The marks are supposed to be arranged so that the greatest possible number of different distances can be made, by measuring between selected pairs of points.

So, like, in a regularly spaced ruler, you have a lot of ways to measure a distance of 1 unit of length. Only one fewer way to measure a distance of 2 units. One fewer still to measure a distance of 3 units, and so on. Convenient, but wasteful of marks. A Golomb ruler might, say, put marks only at the points a regularly spaced ruler would label 0, 1, and 3. Then by choosing the correct pairs you can measure a distance of 1, 2, or 3 units, and no two pairs of marks measure the same distance.

There are applications of the Golomb ruler, stuff in information theory and sensor design. Also logistics. Never mind those. They present a neat little puzzle: can you find, for a given number of marks, the best possible arrangement of them into a ruler? That would be the arrangement that allows the greatest number of different lengths. Or perhaps the one that allows the longest unbroken run of whole-number distances. Your definition of best-possible determines what the answer is.
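
If you do want to try, here is a toy Python search, workable for maybe a half-dozen marks before it bogs down. It uses the distances-all-different definition of a Golomb ruler and shortest overall length as the measure of best-possible.

```python
# Exhaustive search for the shortest ruler with k marks whose pairwise
# distances are all distinct. Only practical for small k: the real
# searches tested quintillions of candidate rulers.
from itertools import combinations

def is_golomb(marks):
    dists = [b - a for a, b in combinations(marks, 2)]
    return len(dists) == len(set(dists))

def shortest_golomb(k):
    length = k - 1
    while True:
        for inner in combinations(range(1, length), k - 2):
            marks = (0,) + inner + (length,)
            if is_golomb(marks):
                return marks
        length += 1

print(shortest_golomb(4))  # (0, 1, 4, 6): every distance 1..6, each once
```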

As a number theory problem it won’t surprise you to know there’s not a general answer. If I’m reading accurately most of the known best arrangements — the ones that allow the greatest number of differences — were proven by testing out cases. The 24-mark arrangement needed a test of 555,529,785,505,835,800 different rulers. MathWorld’s page on this tells me that optimal mark placement isn’t known for 25 or more marks. It also says that the 25-mark ruler’s optimal arrangement was published in 2008. So it isn’t just Wikipedia where someone will write an article, and then someone else will throw a new heap of words onto it, and nobody will read to see if the whole thing still makes sense. Wikipedia meanwhile lists optimal configurations for up to 27 points, demonstrated by 2014.

And as this suggests, you aren’t going to discover an optimal arrangement for some number of marks yourself. Unless you should be the first person to figure out an algorithm to do it. It’s not even known how complex an algorithm has to be. It’s suspected that it has to be NP-hard, though. But, while you won’t discover anything new to mathematics in pondering this, you can still have the fun of working out arrangements yourself, at least for a handful of points. There are numbers of points with more than one optimal arrangement.

(Golomb here is Solomon W Golomb, a mathematician and electrical engineer with a long history in information theory and also recreational mathematics problems. There are several parties who independently invented the problem. But Golomb actually did work with rulers, so at least they aren’t incorrectly named.)

My 2019 Mathematics A To Z: Relatively Prime

I have another subject nominated by goldenoj today. And it even lets me get into number theory, the field of mathematics questions that everybody understands and nobody can prove.

Cartoony banner illustration of a coati, a raccoon-like animal, flying a kite in the clear autumn sky. A skywriting plane has written 'MATHEMATIC A TO Z'; the kite, with the letter 'S' on it, completes the word 'MATHEMATICS'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Relatively Prime.

I was once a young grad student working as a teaching assistant and unaware of the principles of student privacy. Near the end of semesters I would e-mail students their grades. This so they could correct any mistakes and know what they’d have to get on the finals. I was learning Perl, which was an acceptable pastime in the 1990s. So I wrote scripts that would take my spreadsheet of grades and turn it into e-mails that were automatically sent. And then I got all fancy.

It seemed boring to send out completely identical form letters, even if any individual would see it only once. Maybe twice, if they got me for another class. So I started writing variants of the boilerplate sentences. My goal was that every student would get a mass-produced yet unique e-mail. To better the chances of this I had to make sure of something about all these variant sentences and paragraphs.

So you see the trick. I needed a set of relatively prime numbers. That way, it would be the greatest possible number of students before I had a completely repeated text. We know what prime numbers are. They’re the numbers that, in your field, have exactly two factors. In the counting numbers the primes are numbers like 2, 3, 5, 7 and so on. In the Gaussian integers, these are numbers like 3 and 7 and 3 - 2\imath . But not 2 or 5. We can look to primes among the polynomials. Among polynomials with rational coefficients, x^2 + x + 1 is prime. So is 2x^2 + 14x + 1 . x^2 - 4 is not.

The idea of relative primes appears wherever primes appears. We can say without contradiction that 4 and 9 are relative primes, among the whole numbers. Though neither’s prime, in the whole numbers, neither has a prime factor in common. This is an obvious way to look at it. We can use that definition for any field that has a concept of primes. There are others, though. We can say two things are relatively prime if there’s a linear combination of them that adds to the identity element. You get a linear combination by multiplying each of the things by a scalar and adding these together. Multiply 4 by -2 and 9 by 1 and add them and look what you get. Or, if the least common multiple of a set of elements is equal to their product, then the elements are relatively prime. Some make sense only for the whole numbers. Imagine the first quadrant of a plane, marked in Cartesian coordinates. Draw the line segment connecting the point at (0, 0) and the point with coordinates (m, n). If that line segment touches no dots between (0, 0) and (m, n), then the whole numbers m and n are relatively prime.
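
The linear-combination definition can be checked mechanically with the extended Euclidean algorithm. Here is a Python sketch of that; run on 4 and 9 it recovers the very combination mentioned above, multiplying 4 by -2 and 9 by 1.

```python
# Two integers are relatively prime exactly when some integer
# combination of them equals 1. The extended Euclidean algorithm
# finds such a combination whenever one exists.
from math import gcd

def bezout(a, b):
    """Return (x, y) with a*x + b*y == gcd(a, b)."""
    old_r, r = a, b
    old_x, x = 1, 0
    old_y, y = 0, 1
    while r:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_x, x = x, old_x - q * x
        old_y, y = y, old_y - q * y
    return old_x, old_y

x, y = bezout(4, 9)
print(gcd(4, 9), x, y)  # gcd 1, with the combination 4*(-2) + 9*1 == 1
```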

We start looking at relative primes as pairs of things. We can be interested in larger sets of relative primes, though. My little e-mail generator, for example, wouldn’t work so well if any pair of sentence replacements were not relatively prime. So, like, the set of numbers 2, 6, 9 is relatively prime; no prime factor divides all three numbers. But neither the pair 2, 6 nor the pair 6, 9 is relatively prime. The pair 2, 9 is, at least there’s that. I forget how many replaceable sentences were in my form e-mails. I’m sure I did the cowardly thing, coming up with a prime number of alternate ways to phrase as many sentences as possible. As an undergraduate I covered the student government for four years’ worth of meetings. I learned a lot of ways to say the same thing.
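
Here is a small Python sketch of that distinction for the 2, 6, 9 example: the set is relatively prime as a whole, without being relatively prime pair by pair.

```python
# Setwise coprime: no prime divides every element (gcd of all is 1).
# Pairwise coprime: every pair has gcd 1, a stronger condition.
from math import gcd
from itertools import combinations

def setwise_coprime(nums):
    g = 0
    for n in nums:
        g = gcd(g, n)
    return g == 1

def pairwise_coprime(nums):
    return all(gcd(a, b) == 1 for a, b in combinations(nums, 2))

print(setwise_coprime([2, 6, 9]))   # True
print(pairwise_coprime([2, 6, 9]))  # False: gcd(2, 6) = 2 and gcd(6, 9) = 3
```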

Which is all right, but are relative primes important? Relative primes turn up all over the place in number theory, and in corners of group theory. There are some things that are easier to calculate in modulo arithmetic if we have relatively prime numbers to work with. I know when I see modulo arithmetic I expect encryption schemes to follow close behind. Here I admit I’m ignorant whether these imply things which make encryption schemes easier or harder.

Some of the results are neat, certainly. Suppose that the function f is a polynomial. Then, if its first derivative f’ is relatively prime to f, it turns out f has no repeated roots. And vice-versa: if f has no repeated roots, then it and its first derivative are relatively prime. You remember repeated roots. They’re factors like (x - 2)^2 , that foiled your attempt to test a couple points and figure roughly where a polynomial crossed the x-axis.
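We can watch that happen with a little Python. This is a sketch of my own, not anyone’s official library: polynomials are lists of rational coefficients, lowest power first, and we run Euclid’s algorithm on f and f'.

```python
from fractions import Fraction

def trim(p):
    """Drop trailing zero coefficients (the highest powers)."""
    while p and p[-1] == 0:
        p.pop()
    return p

def deriv(p):
    """Derivative of a polynomial stored lowest power first."""
    return [i * c for i, c in enumerate(p)][1:]

def polymod(a, b):
    """Remainder of a divided by b, over the rationals."""
    a = trim([Fraction(c) for c in a])
    b = trim([Fraction(c) for c in b])
    while a and len(a) >= len(b):
        factor = a[-1] / b[-1]
        shift = len(a) - len(b)
        for i, c in enumerate(b):
            a[i + shift] -= factor * c
        trim(a)
    return a

def polygcd(a, b):
    """Euclid's algorithm, the same dance as for whole numbers."""
    a = trim([Fraction(c) for c in a])
    b = trim([Fraction(c) for c in b])
    while b:
        a, b = b, polymod(a, b)
    return a

def has_repeated_root(p):
    """p shares a factor with p' exactly when p has a repeated root."""
    return len(polygcd(p, deriv(p))) > 1   # gcd of degree at least 1

print(has_repeated_root([4, -4, 1]))  # (x - 2)^2: True
print(has_repeated_root([1, 1, 1]))   # x^2 + x + 1: False
```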

I mentioned that primeness depends on the field. This is true of relative primeness. Polynomials really show this off. (Here I’m using an example explained in a 2007 Ask Dr Math essay.) Is the polynomial 3x + 6 relatively prime to 3x^2 + 12 ?

It is, if we are interested in polynomials with integer coefficients. There’s no linear combination of 3x + 6 and 3x^2 + 12 which gets us to 1. Go ahead and try.

It is not, if we are interested in polynomials with rational coefficients. Multiply 3x + 6 by \frac{1}{12}\left(1 - \frac{1}{2}x\right) and multiply 3x^2 + 12 by \frac{1}{24} . Then add those up.
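If you don’t trust my algebra, exact rational arithmetic can check it. A Python sketch (the function name is mine): both pieces of the claimed combination are polynomials of degree two, so agreeing at three points means agreeing everywhere; we check a couple extra points anyway.

```python
from fractions import Fraction

def combination(x):
    """The Ask Dr Math linear combination, evaluated with exact rationals."""
    x = Fraction(x)
    first = (3 * x + 6) * Fraction(1, 12) * (1 - x / 2)
    second = (3 * x * x + 12) * Fraction(1, 24)
    return first + second

# Every value is exactly 1, so the combination is the constant polynomial 1.
print([combination(x) for x in range(5)])
```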

Tell me what polynomials you want to deal with today and I will tell you which answer is right.

This may all seem cute, if perhaps petty. A bunch of anonymous theorems dotting the center third of an abstract algebra text will inspire that. The most important relative-primes thing I know of is the abc conjecture, posed in the mid-80s by Joseph Oesterlé and David Masser. Start with three counting numbers, a, b, and c. Require that a and b be relatively prime, and that a + b = c.

Take the product of the unique prime factors of a, b, and c. That is, let’s say a is 36. This is 2 times 2 times 3 times 3. Let’s say b is 5. This is prime. c is 41; it’s prime. Their unique prime factors are 2, 3, 5, and 41; the product of all these is 1,230.
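This product of distinct prime factors is called the radical. A little Python sketch (the function is my own naming) recovers the 1,230:

```python
def radical(n):
    """Product of the distinct prime factors of n."""
    product, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            product *= p          # count each prime once ...
            while n % p == 0:
                n //= p           # ... no matter its multiplicity
        p += 1
    if n > 1:
        product *= n              # whatever remains is prime
    return product

a, b, c = 36, 5, 41
print(radical(a * b * c))  # 2 * 3 * 5 * 41 = 1230
```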

The conjecture deals with this product of unique prime factors for this relatively prime triplet. Almost always, c is going to be smaller than this unique prime factors product. The conjecture says that there will be, for every positive real number \epsilon , at most finitely many cases where c is larger than this product raised to the power 1 + \epsilon . The 1 + \epsilon is needed: with the product raised to just the first power, there are infinitely many exceptions, so that blunter version of the claim would be false.

Apart from that 1 + \epsilon bit, though, this is a classic sort of number theory conjecture. Like, it involves some technical terms, but nothing too involved. You could almost explain it at a party and expect to be understood, and to get some people writing down numbers, testing out specific cases. Nobody will go away solving the problem, but they’ll have some good exercise and that’s worthwhile.

And it has consequences. We do not know whether the abc conjecture is true. We do know that if it is true, then a bunch of other things follow. The one that a non-mathematician would appreciate is that Fermat’s Last Theorem would be provable by an alternate route. The abc conjecture would only prove the cases for Fermat’s Last Theorem for powers greater than 5. But that’s all right. We can separately work out the cases for the third, fourth, and fifth powers, and then cover everything else at once. (That we know Fermat’s Last Theorem is true doesn’t let us conclude the abc conjecture is true, unfortunately.)

There are other implications. Some are about problems that seem like fun to play with. If the abc conjecture is true, then for every integer A, there are finitely many values of n for which n! + A is a perfect square. Some are of specialist interest: Lang’s conjecture, about elliptic curves, would be true. This is a lower bound for the height of non-torsion rational points. I’d stick to the n! + A stuff at a party. A host of conjectures about Diophantine equations — (high school) algebra problems where only integers may be solutions — become theorems. Also coming true: the Fermat-Catalan conjecture. This is a neat problem; it claims that the equation

a^m + b^n = c^k

where a, b, and c are relatively prime, and m, n, and k are positive integers satisfying the constraint

\frac{1}{m} + \frac{1}{n} + \frac{1}{k} < 1

has only finitely many solutions with distinct triplets \left(a^m, b^n, c^k\right) . The inequality about reciprocals of m, n, and k is needed so we don’t have boring solutions like 2^2 + 3^3 = 31^1 clogging us up. The bit about distinct triplets is so we don’t clog things up with a or b being 1 and then technically every possible m or n giving us a “different” set. To date we know something like ten solutions, one of them having ‘a’ equal to 1.
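We can at least verify a few of the known solutions, either by hand or by letting Python do the bookkeeping. A sketch, with the checking function my own invention; the three triples below are among the known solutions.

```python
from fractions import Fraction
from math import gcd

def checks_out(a, m, b, n, c, k):
    """Equation holds, bases pairwise coprime, reciprocal sum below one."""
    coprime = gcd(a, b) == 1 and gcd(a, c) == 1 and gcd(b, c) == 1
    small = Fraction(1, m) + Fraction(1, n) + Fraction(1, k) < 1
    return a**m + b**n == c**k and coprime and small

print(checks_out(2, 5, 7, 2, 3, 4))    # 32 + 49 = 81
print(checks_out(7, 3, 13, 2, 2, 9))   # 343 + 169 = 512
print(checks_out(2, 7, 17, 3, 71, 2))  # 128 + 4913 = 5041
```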

Another implication is Pillai’s Conjecture. This one asks whether every positive integer occurs only finitely many times as the difference between perfect powers. Perfect powers are numbers like 32 (two to the fifth power) or 81 (three to the fourth power).

So as often happens when we stumble into a number theory thing, the idea of relative primes is easy. And there are deep implications to them. But those in turn give us things that seem like fun arithmetic puzzles.

This closes out the A to Z essays for this week. Tomorrow and Saturday I hope to bring some attention to essays from past years. And next week I figure to open for topics for the end of the alphabet, the promising letters U through Z. This and the rest of the 2019 essays should appear at this link, as should the letter S next Tuesday. And all of the A to Z essays ought to be at this link. Thank you for reading.

Are These Numbers A Thing?

A friend recently had a birthday. His fun way of mentioning his age was to put it as a story problem, as in this tweet.

He refined the question eventually. His age was now a palindrome, but it had just been a prime number. I came back with the obvious answer: he’s 268,862 years old.

And he was curious how I came up with that. Specifically how I ended up in the six-digit range. The next “age” after 44 would be 212. Jumping to a number that’s outside the range of plausible human ages is the obvious joke. How I got to six digits is less obvious. But it seemed to me that if three digits is funny, then four digits must be funnier. The four-digit palindromes-succeeding-a-prime seemed boring. Five digits it is, then. Oh, but then there’s the thousands comma and that’s not in the middle of the number. Six digits it is. This gives you insight into why I am a humor blogger and not a successful humor blogger.

I didn’t go looking for numbers myself, by the way. I had Octave, this open-source clone of Matlab, tell me whether numbers were prime. I just had to think of palindromic numbers.

There are a couple of plausible human ages that are palindromes-succeeding-a-prime. At least if you accept a one-digit number as a palindrome. (I don’t see a reason not to.) Those would be 3, 4, 6, 8, and 44. After that we get into numbers that humans are not likely to reach, such as 212 and 242 and 252 and 272 and 282. Then nothing until 434, 444, 464, and so on. Certainly nothing in the 500’s.
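Anyone without Octave handy can reproduce the hunt in a few lines of Python. A sketch of my own (function names mine), using plain trial division, which is plenty fast for numbers this small:

```python
def is_prime(n):
    """Trial division; fine for numbers this small."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_palindrome(n):
    return str(n) == str(n)[::-1]

# Palindromes whose immediate predecessor is prime, up to 500:
ages = [n for n in range(1, 501) if is_palindrome(n) and is_prime(n - 1)]
print(ages)  # [3, 4, 6, 8, 44, 212, 242, 252, 272, 282, 434, 444, 464]
```

Notice the list skips every odd palindrome past 3: an odd palindrome’s predecessor is even, and the only even prime is 2.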

So that’s got me wondering two things, and they’re open questions. The first is, is this sequence a thing? That is, has anybody done any kind of study about palindromes-after-a-prime? I’m not saying that this is an important sequence. This is a sequence that you look at and say, “Huh” about. But there are a lot of sequences that you can mostly just say “Huh” about. That doesn’t mean we don’t know anything about them. I checked in the Online Encyclopedia of Integer Sequences, and found nothing. But I’m not confident in my searching ability.

The second thing is what anyone studying this sequence would first like to know. Is this an infinitely long sequence? Or is there a largest palindrome-succeeding-a-prime? My instinct is to say there’s not a largest. There are infinitely many prime numbers. There are infinitely many palindromic numbers. Surely the coincidence that a prime is followed by a palindrome happens infinitely often. That is purely a guess, however.

There could be an end to this. Consider truncatable primes: a prime number that (in base ten) is still prime if you truncate any string of its leftmost or its rightmost digits. That is, like, chop any number of digits off the left end, or off the right end, of 3797, and you still have a prime number. There are only finitely many primes that let you do this. Specifically there are 15 (base-ten) primes that let you chop off either the left or the right side and still have a prime.

Bizarrely, if you allow only chopping off digits on the left side, there are 4,260 left-truncatable primes. If you allow only chopping off digits on the right side, there are 83. If you chop off alternating digits, one from the left side and then one from the right, then there are 920,720,315 such primes. This hasn’t anything to do with my friend’s numbers. They’re just an example that this sort of sequence can peter out, and in unpredictable ways. Wouldn’t you have guessed there to be about as many left-truncatable as right-truncatable numbers? And fewer alternating-truncatable numbers than either? … Well, I would have, anyway.
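The right-truncatable count is small enough to reproduce yourself. A Python sketch of my own: every prefix of a right-truncatable prime must itself be prime, so we can grow the whole family digit by digit from 2, 3, 5, and 7.

```python
def is_prime(n):
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# Extend each surviving prime with a digit; later digits must be odd
# and not 5, or the result is trivially composite.
layer = [2, 3, 5, 7]
right_truncatable = []
while layer:
    right_truncatable.extend(layer)
    layer = [10 * p + d for p in layer for d in (1, 3, 7, 9)
             if is_prime(10 * p + d)]

print(len(right_truncatable))  # 83
```

The search dies out after eight digits, at 73,939,133, which is why the count is finite at all.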

So I don’t know. And I know that number theory problems like this have a habit of being either solvable by a tiny bit of cleverness or being basically impossible to do. No idea which this is, but someone out there might enjoy passing a dull meeting by doodling integers.

In Which I Probably Just Make Myself A Problem That Can’t Be Solved

I got to thinking about unit fractions. This is in the service of a different problem I might get around to talking about. Unit fractions are fractions, yeah. They allow the denominator to be any counting number. The numerator is always 1, that is, the unit. So, like, \frac12 , \frac18 , \frac{1}{432950} , these are all unit fractions.

Particularly I was thinking about sums of unit fractions. And whether the sum of a particular group of unit fractions is less than or greater than one. Like, \frac12 + \frac13 + \frac14 is greater than one, sure. But \frac13 + \frac14 + \frac15 is less than one. But which is \frac13 + \frac14 + \frac16 + \frac17 ? Is there an easy way to tell? I mean easier than addition, which is admittedly pretty simple to start with. Might be fun to spot a straightforward way to do this.
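Python’s fractions module does the addition exactly, which at least makes the checking painless. A tiny sketch (the function name is my own):

```python
from fractions import Fraction

def unit_sum(*denominators):
    """Exact sum of the unit fractions 1/d for each given d."""
    return sum(Fraction(1, d) for d in denominators)

print(unit_sum(2, 3, 4))     # 13/12, greater than one
print(unit_sum(3, 4, 5))     # 47/60, less than one
print(unit_sum(3, 4, 6, 7))  # 25/28, less than one
```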

Where my joy in this fun little problem disappears is realizing, oh, of course, this is a number theory problem. Number theory is about studying how numbers work and what they do. It’s full of great questions you can understand even if you don’t know much mathematics. Like, is there a largest prime number? Is every even number larger than two equal to a sum of two prime numbers? “Can you tell at a glance whether a set of unit fractions adds up to more than 1” fits in that line. (A mathematician might clean that up by saying “can you tell by inspection”, but still.)

The trouble with number theory problems is they pretty much break one of two ways. One is that we have an answer and can prove it. The proof is this 12-line thread of argument so tight you can cut yourself on the reductio ad absurdum. Allow a couple symbols and you could fit the thing in two tweets. The other way a number theory problem can break is “well, after 120 years of study, we have a 60-page proof that seems to answer a specialized case of this problem, if we assume that the Riemann hypothesis is true”. Or possibly “if we assume the continuum hypothesis is [ true or false ]”. Anyway, there are people who have some doubts about the section between pages 38 and 44.

I haven’t poked around the literature, not even Wikipedia, yet. So I don’t know which kind this is more likely to be. My suspicion is there are probably some neat 12-line proofs. Unit fractions are terms in the “harmonic series”. This is the number you get by adding together \frac11 + \frac12 + \frac13 + \frac14 + \frac15 + \frac16 + \cdots , carrying on with the denominator going through every whole number ever. This series turns out to “diverge”. You go ahead and pick any number you like. I can then pick a finite set of terms from the harmonic series. And my set of terms will add up to something bigger than your number.
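A sketch of that divergence in Python, with exact arithmetic so no rounding sneaks in (the function name is mine). It finds how many terms the partial sum needs before passing a target:

```python
from fractions import Fraction

def terms_to_exceed(target):
    """Smallest n with 1/1 + 1/2 + ... + 1/n > target, summed exactly."""
    total, n = Fraction(0), 0
    while total <= target:
        n += 1
        total += Fraction(1, n)
    return n

print(terms_to_exceed(2))  # 4
print(terms_to_exceed(5))  # 83
```

The counts grow roughly like e raised to the target, which is why the divergence is so easy to miss by experiment.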

And yet other weird stuff happens. Like, pick any string of digits you like. I’ll say ’35’ because I’m writing this sentence at 35 minutes past the hour. Keep the whole harmonic series except for any terms which have the sequence ’35’ in them. So, no \frac{1}{35} , no \frac{1}{358} , no \frac{1}{835} , no \frac{1}{8358} . Although \frac{1}{3858} is still in. Add up the infinitely many terms that remain. That will “converge”, adding up to some finite number.

So you see I’m looking at a problem that’s in well-explored waters. This makes me also suspect there isn’t a better answer than “just add your fractions up”. If there were, it would probably be a mildly well-known trick used for arithmetic magic tricks. Or, possibly, as an odd trick used to squeeze some other proof down to 12 lines.

My 2018 Mathematics A To Z: Pigeonhole Principle

Today’s topic is another that Dina Yagodich offered me. She keeps up a YouTube channel, with a variety of educational videos you might enjoy. And I apologize to Roy Kassinger, but I might come back around to “parasitic numbers” if I feel like doing some supplements or side terms.

Cartoon of a thinking coati (it's a raccoon-like animal from Latin America); beside him are spelled out on Scrabble titles, 'MATHEMATICS A TO Z', on a starry background. Various arithmetic symbols are constellations in the background.
Art by Thomas K Dye, creator of the web comics Newshounds, Something Happens, and Infinity Refugees. His current project is Projection Edge. And you can get Projection Edge six months ahead of public publication by subscribing to his Patreon. And he’s on Twitter as @Newshoundscomic.

Pigeonhole Principle.

Mathematics has a reputation for obscurity. It’s full of jargon. All these weird, technical terms. Properties that mathematicians take to be obvious but that normal people find baffling. “The only word I understood was `the’,” is the feedback mathematicians get when they show their family their thesis. I’m happy to share a term that is not obscure. This is one of those principles that anyone can understand. It’s so accessible that people might ask how it’s even mathematics.

The Pigeonhole Principle is usually put something like this. If you have more pigeons than there are different holes to nest them in, then at least one pigeonhole has to hold more than one pigeon. This is if the speaker wishes to keep pigeons in the discussion and is assuming there’s a positive number of both pigeons and holes. Tying a mathematical principle to something specific seems odd. We don’t talk about addition as apples put together or divided among friends. Not after elementary school anyway. Not once we’ve trained our natural number sense to work with abstractions.

One pigeon, as photographed by me in Pittsburgh in summer 2017. Not depicted: pigeonholes.
A pigeon, on a Pittsburgh sidewalk in summer 2017, giving me a good looking-over.

If we want to make it abstract we can. Put it as “if you have more objects to put in boxes than you have boxes, then at least one box must hold more than one object”. In this form it is known as the Dirichlet Box Principle. Dirichlet here is Johann Peter Gustav Lejeune Dirichlet. He’s one of the seemingly infinite number of great 19th-Century French-German mathematicians. His family name was “Lejeune Dirichlet”, so his surname is an example of his own box principle. Everyone speaks of him as just Dirichlet, though. And they speak of him a lot, for stuff in mathematical physics, in thermodynamics, in Fourier transforms, in number theory (he proved two specific cases of Fermat’s Last Theorem), and in probability.

Still, at least in my experience, it’s “pigeonhole principle”. I don’t know why pigeons. It would be as good a metaphor to speak of horses put in stalls, or letters put in mailboxes, or pairs of socks put in hotel drawers. Perhaps it’s a reflection of the long history of breeding pigeons. That they’re familiar, likable animals, despite invective. That a bird in a cubby-hole seems like a cozy, pleasant image.

The pigeonhole principle is one of those neat little utility theorems. I think of it as something handy for existence proofs. These are proofs where you show there must be a thing. They don’t typically tell you what the thing is, or even help you to find it. They promise there is something to find.

Some of its uses seem too boring to bother proving. Pick five cards from a standard deck of cards; at least two will be the same suit. There are at least two non-bald people in Philadelphia who have the same number of hairs on their heads. Some of these uses seem interesting enough to prove, but worth nothing more than a shrug and a huh. Any 27-word sequence in the Constitution of the United States includes at least two words that begin with the same letter. Also at least two words that end with the same letter. If you pick any five integers from 1 to 8 (inclusive), then at least two of them will sum to nine.
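That last one is small enough to check exhaustively. A Python sketch of my own: the pigeonholes are the four pairs {1,8}, {2,7}, {3,6}, {4,5}, and five picks must land two in the same pair.

```python
from itertools import combinations

def has_pair_summing_to_nine(picks):
    return any(a + b == 9 for a, b in combinations(picks, 2))

# Every choice of five distinct integers from 1 through 8 qualifies.
print(all(has_pair_summing_to_nine(c)
          for c in combinations(range(1, 9), 5)))  # True
```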

Some uses start feeling unsettling. Draw five dots on the surface of an orange. It’s always possible to cut the orange in half in such a way that four points are on the same half. (This supposes that a point on the cut counts as being on both halves.)

Pick a set of 100 different whole numbers. It is always possible to select fifteen of these numbers, so that the difference between any pair of these select fifteen is some whole multiple of 7.

Select six people. There is always a set of three people who all know one another, or who are all strangers to one another. (This supposes that “knowing one another” is symmetric. Real world relationships are messier than this. I have met Roger Penrose. There is not the slightest chance he is aware of this. Do we count as knowing one another or not?)

Some seem to transcend what we could possibly know. Drop ten points anywhere along a circle of diameter 5. Then we can conclude there are at least two points a distance of less than 2 from one another.

Drop ten points into an equilateral triangle whose sides are all length 1. Then there must be at least two points that are no more than distance \frac{1}{3} apart.

Start with any lossless data compression algorithm. Your friend with the opinions about something called “Ubuntu Linux” can give you one. There must be at least one data set it cannot compress. Your friend is angry about this fact.

Take a line of length L. Drop on it some number of points n + 1. There is some shortest length between consecutive points. What is the largest possible shortest-length-between-points? It is the distance \frac{L}{n} .
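A Python sketch of my own, using exact fractions: evenly spaced points achieve a shortest gap of exactly L/n, and random placements never do better.

```python
from fractions import Fraction
from random import Random

L, n = Fraction(7), 5

def shortest_gap(points):
    pts = sorted(points)
    return min(b - a for a, b in zip(pts, pts[1:]))

# Evenly spaced points achieve the bound exactly ...
even = [L * i / n for i in range(n + 1)]
print(shortest_gap(even) == L / n)  # True

# ... and random placements of n + 1 points on [0, L] never beat it.
rng = Random(0)
trials = [[L * Fraction(rng.randrange(10**6), 10**6) for _ in range(n + 1)]
          for _ in range(200)]
print(all(shortest_gap(t) <= L / n for t in trials))  # True
```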

As I say, this won’t help you find the examples. You need to check the points in your triangle to see which ones are close to one another. You need to try out possible sets of your 100 numbers to find the ones that are all multiples of seven apart. But you have the assurance that the search will end in success, which is no small thing. And many of the conclusions you can draw are delights: results unexpected and surprising and wonderful. It’s great mathematics.

A note on sources. I drew pigeonhole-principle uses from several places. John Allen Paulos’s Innumeracy. Paulos is another I know without their knowing me. But also 16 Fun Applications of the Pigeonhole Principle, by Presh Talwalkar. Brilliant’s Pigeonhole Principle, by Lawrence Chiou, Parth Lohomi, Andrew Ellinor, et al. The Art of Problem Solving’s Pigeonhole Principle, author not made clear on the page. If you’re stumped by how to prove one or more of these claims, and don’t feel like talking them out here, try these pages.

My next Fall 2018 Mathematics A-To-Z post should be Tuesday. It’ll be available at this link, as are the rest of these glossary posts.

My 2018 Mathematics A To Z: Fermat’s Last Theorem

Today’s topic is another request, this one from a Dina. I’m not sure if this is Dina Yagodich, who’d also suggested using the letter ‘e’ for the number ‘e’. Trusting that it is, Dina Yagodich has a YouTube channel of mathematics videos. They cover topics like how to convert degrees and radians to one another, what the chance of a false positive (or false negative) on a medical test is, ways to solve differential equations, and how to use computer tools like MathXL, TI-83/84 calculators, or Matlab. If I’m mistaken, original-commenter Dina, please let me know and let me know if you have any creative projects that should be mentioned here.


Fermat’s Last Theorem.

It comes to us from number theory. Like many great problems in number theory, it’s easy to understand. If you’ve heard of the Pythagorean Theorem you know, at least, there are triplets of whole numbers so that the first number squared plus the second number squared equals the third number squared. It’s easy to wonder about generalizing. Are there quartets of numbers, so the squares of the first three add up to the square of the fourth? Quintuplets? Sextuplets? … Oh, yes. That’s easy. What about triplets of whole numbers, including negative numbers? Yeah, and that turns out to be boring. Triplets of rational numbers? Turns out to be the same as triplets of whole numbers. Triplets of real-valued numbers? Turns out to be very boring. Triplets of complex-valued numbers? Also none too interesting.

Ah, but, what about a triplet of numbers, only raised to some other power? All three numbers raised to the first power is easy; we call that addition. To the third power, though? … The fourth? Any other whole number power? That’s hard. It’s hard finding, for any given power, a trio of numbers that work, although some come close. I’m informed there was an episode of The Simpsons which included, as a joke, the equation 1782^{12} + 1841^{12} = 1922^{12} . If it were true, this would be enough to show Fermat’s Last Theorem was false. … Which happens. Sometimes, mathematicians believe they have found something which turns out to be wrong. Often this comes from noticing a pattern, and finding a proof for a specific case, and supposing the pattern holds up. This equation isn’t true, but it is correct for the first nine digits. The episode “The Wizard of Evergreen Terrace” puts forth 3987^{12} + 4365^{12} = 4472^{12} , which apparently matches ten digits. This includes the final digit, also known as “the only one anybody could check”. (The last digit of 3987^{12} is 1. The last digit of 4365^{12} is 5. The last digit of 4472^{12} is 6, and there you go.) Really makes you think there’s something weird going on with 12th powers.
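The near-miss is easy to interrogate with Python’s arbitrary-precision integers. A tiny sketch:

```python
# The Simpsons near-miss: right to nine digits, but still wrong.
lhs = 1782**12 + 1841**12
rhs = 1922**12

print(str(lhs)[:9] == str(rhs)[:9])  # True: the leading nine digits agree
print(lhs == rhs)                    # False, as Fermat requires
print(lhs % 10, rhs % 10)            # 7 6 -- the last digits already differ
```

The last-digit check is the quick way to debunk it at a party: 1782 to the twelfth ends in 6, 1841’s power ends in 1, and 6 + 1 is not the 6 that 1922’s power ends in.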

For a Fermat-like example, Leonhard Euler conjectured a thing about “Sums of Like Powers”. That for a whole number ‘n’, you need at least n whole numbers-raised-to-an-nth-power to equal something else raised to an n-th power. That is, you need at least three whole numbers raised to the third power to equal some other whole number raised to the third power. At least four whole numbers raised to the fourth power to equal something raised to the fourth power. At least five whole numbers raised to the fifth power to equal some number raised to the fifth power. Euler was wrong, in this case. L J Lander and T R Parkin published, in 1966, the one-paragraph paper Counterexample to Euler’s Conjecture on Sums of Like Powers. 27^5 + 84^5 + 110^5 + 133^5 = 144^5 and there we go. Thanks, CDC 6600 computer!
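One paragraph of paper, one line of Python to confirm it:

```python
# Lander and Parkin's counterexample to Euler's sum-of-like-powers conjecture.
print(27**5 + 84**5 + 110**5 + 133**5 == 144**5)  # True
```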

But Fermat’s hypothesis. Let me put it in symbols. It’s easier than giving everything long, descriptive names. Suppose that the power ‘n’ is a whole number greater than 2. Then there are no three counting numbers ‘a’, ‘b’, and ‘c’ which make true the equation a^n + b^n = c^n . It looks doable. It looks like once you’ve mastered high school algebra you could do it. Heck, it looks like if you know the proof about how the square root of two is irrational you could approach it. Pierre de Fermat himself said he had a wonderful little proof of it.

He was wrong. No shame in that. He was right about a lot of mathematics, including a lot of stuff that leads into the basics of calculus. And he was right in his feeling that this a^n + b^n = c^n stuff was impossible. He was wrong that he had a proof. At least not one that worked for every possible whole number ‘n’ larger than 2.

For specific values of ‘n’, though? Oh yes, that’s doable. Fermat did it himself for an ‘n’ of 4. Euler, a century later, filled in ‘n’ of 3. Peter Dirichlet, a great name in number theory and analysis, and Adrien-Marie Legendre proved the case of ‘n’ of 5. Dirichlet, in 1832, proved the case for ‘n’ of 14. And there were more partial solutions. You could show that if Fermat’s Last Theorem were ever false, it would have to be false for some prime-number value of ‘n’. That’s great work, answering as it does infinitely many possible cases. It just leaves … infinitely many to go.

And that’s how things went for centuries. I don’t know that every mathematician made some attempt on Fermat’s Last Theorem. But it seems hard to imagine a person could love mathematics enough to spend their lives doing it and not at least take an attempt at it. Nobody ever found a proof, though. In a 1989 episode of Star Trek: The Next Generation, Captain Picard muses on how eight centuries after Fermat nobody’s proven his theorem. This struck me at the time as too pessimistic. Granted humans were stumped for 400 years. But for 800 years? And stumping everyone in a whole Federation of a thousand worlds? And more than a thousand mathematical traditions? And, for some of these species, tens of thousands of years of recorded history? … Still, there wasn’t much sign of anyone solving the problem. In 1992 Analog Science Fiction Magazine published a funny short-short story by Ian Randal Strock, “Fermat’s Legacy”. In it, Fermat — jealous of figures like René Descartes and Blaise Pascal who upstaged his mathematical accomplishments — jots down the note. He figures an unsupported claim like that will earn true lasting fame.

So that takes us to 1993, when the world heard about elliptic curves for the first time. Elliptic curves are neat things. They’re curves described by polynomial equations. They have some nice mathematical properties. People first noticed them in studying how long arcs of ellipses are. (This is why they’re called elliptic curves, even though most of them have nothing to do with any ellipse you’d ever tolerate in your presence.) They look ready to use for encryption. And in 1985, Gerhard Frey noticed something. Suppose you did have, for some ‘n’ bigger than 2, a solution a^n + b^n = c^n . Then you could use that a, b, and n to make a new elliptic curve. That curve is the one that satisfies y^2 = x\cdot\left(x - a^n\right)\cdot\left(x + b^n\right) . And then that elliptic curve would not be “modular”.

I would like to tell you what it means for an elliptic curve to be modular. But getting to that point would take at least four subsidiary essays. MathWorld has a description of what it means to be modular, and even links to explaining terms like “meromorphic”. It gets into exotic stuff.

Frey didn’t show whether elliptic curves of this type had to be modular or not. This is normal enough, for mathematicians. You want to find things which are true and interesting. This includes conjectures like this, that if elliptic curves are all modular then Fermat’s Last Theorem has to be true. Frey was working on consequences of the Taniyama-Shimura Conjecture, itself three decades old at that point. Yutaka Taniyama and Goro Shimura had found there seemed to be a link between elliptic curves and these “modular forms”, which are a kind of group. That is, a group-theory thing.

So in fall of 1993 I was taking an advanced, though still undergraduate, course in (not-high-school) algebra at Rutgers. It’s where we learn group theory, after Intro to Algebra introduced us to group theory. Some exciting news came out. This fellow named Andrew Wiles at Princeton had shown an impressive bunch of things. Most important, that the Taniyama-Shimura Conjecture was true for semistable elliptic curves. This includes the kind of elliptic curve Frey made out of solutions to Fermat’s Last Theorem. So the curves based on solutions to Fermat’s Last Theorem would have to be modular. But Frey had shown any curves based on solutions to Fermat’s Last Theorem couldn’t be modular. The conclusion: there can’t be any solutions to Fermat’s Last Theorem. Our professor did his best to explain the proof to us. Abstract Algebra was the undergraduate course closest to the stuff Wiles was working on. It wasn’t very close. When you’re still trying to work out what it means for something to be an ideal it’s hard to even follow the setup of the problem. The proof itself was inaccessible.

Which is all right. Wiles’s original proof had some flaws. At least this mathematics major shrugged when that news came down and wondered, well, maybe it’ll be fixed someday. Maybe not. I remembered how exciting cold fusion was for about six weeks, too. But this someday didn’t take long. Wiles, with Richard Taylor, revised the proof and published about a year later. So far as I’m aware, nobody has any serious qualms about the proof.

So does knowing Fermat’s Last Theorem get us anything interesting? … And here is a sad anticlimax. It’s neat to know that a^n + b^n = c^n can’t be true unless ‘n’ is 1 or 2, at least for positive whole numbers. But I’m not aware of any neat results that follow from that, or that would follow if it were untrue. There are results that follow from the Taniyama-Shimura Conjecture that are interesting, according to people who know them and don’t seem to be fibbing me. But Fermat’s Last Theorem turns out to be a cute little aside.

Which is not to say studying it was foolish. This easy-to-understand, hard-to-solve problem certainly attracted talented minds to think about mathematics. Mathematicians found interesting stuff in trying to solve it. Some of it might be slight. Studying Pythagorean triplets — ‘a’, ‘b’, and ‘c’ with a^2 + b^2 = c^2 — taught me that I was not the infinitely brilliant mathematician at age fifteen I hoped I might be. It also taught me that if ‘a’, ‘b’, and ‘c’ are relatively prime, you can’t have ‘a’ and ‘b’ both odd and ‘c’ even. You have to have ‘c’ and either ‘a’ or ‘b’ odd, with the other number even. Other mathematicians of more nearly infinite ability found stuff of greater import. Ernst Eduard Kummer in the 19th century developed ideals. These are an important piece of ring theory. He was busy proving special cases of Fermat’s Last Theorem.

Kind viewers have tried to retcon Picard’s statement about Fermat’s Last Theorem. They say Picard was really searching for the proof Fermat had, or believed he had. Something using the mathematical techniques available to the early 17th century. Or that follow closely enough from that. The Taniyama-Shimura Conjecture definitely isn’t it. I don’t buy the retcon, but I’m willing to play along for the sake of not causing trouble. I suspect there’s not a proof of the general case that uses anything Fermat could have recognized, or thought he had. That’s all right. The search for a thing can be useful even if the thing doesn’t exist.

Some More Mathematics I’ve Been Reading, 6 October 2018

I have a couple links I’d not included in the recent Playful Mathematics Education Blog Carnival. Looking at them, I can’t say why.

The top page of this asks, with animated text, whether you want to see something amazing. Forgive its animated text. It does do something amazing. This paper by Javier Cilleruelo, Florian Luca, and Lewis Baxter proves that every positive whole number is the sum of at most three palindromic numbers. The web site, by Mathstodon host Christian Lawson-Perfect, demonstrates it. Enter a number and watch the palindromes appear and add up.
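If you’d like to see the theorem in miniature without the animated text, here’s a brute-force Python sketch (function names mine; the actual Cilleruelo-Luca-Baxter algorithm is constructive and much cleverer than trying everything):

```python
def palindromes_up_to(n):
    """All nonnegative palindromic numbers not exceeding n."""
    return [k for k in range(n + 1) if str(k) == str(k)[::-1]]

def three_palindromes(n):
    """Brute-force search for at most three palindromes summing to n.
    Zero counts as a palindrome here, which is harmless."""
    pals = palindromes_up_to(n)
    pal_set = set(pals)
    for p1 in reversed(pals):          # try big palindromes first
        if p1 == n:
            return (p1,)
        for p2 in palindromes_up_to(n - p1):
            if n - p1 - p2 in pal_set:
                return (p1, p2, n - p1 - p2)
    return None

print(three_palindromes(2024))
```

The theorem guarantees the search never returns None for a positive whole number; the web demonstration just does this far more elegantly.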

The next bit is an article that relates to my years-long odd interest in pasta making. Mathematicians solve age-old spaghetti mystery reports on a group of researchers at MIT — the renowned “Rensselaer Polytechnic Institute of Boston” [*] — studying why dry spaghetti fractures the way it does. Like many great problems, it sounds ridiculous to study at first. Who cares why, basically, you can’t snap a dry spaghetti strand into two equal pieces by bending it at the ends? The problem has familiarity to it and seems to have little else. But then you realize this is a matter of how materials work, and how they break. And realize it’s a great question. It’s easy to understand and subtle to solve.

And then, how about quaternions? Everybody loves quaternions. Well, @SheckyR here links to an article, The Many Modern Uses of Quaternions. It’s some modern uses anyway. The major uses for quaternions are in rotations. They’re rather good at representing rotations. And they’re really good at representing doing several rotations, along different axes, in a row.
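Here’s a small Python sketch of why they’re good at it: composing two rotations is nothing but one quaternion multiplication. (Names and conventions are mine; real graphics code would use a tested library.)

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions written as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

def rotation_quaternion(axis, angle):
    """Unit quaternion for rotation by `angle` radians about unit `axis`."""
    s = math.sin(angle / 2)
    return (math.cos(angle / 2), axis[0]*s, axis[1]*s, axis[2]*s)

def rotate(v, q):
    """Rotate vector v by unit quaternion q: v' = q v q*."""
    qconj = (q[0], -q[1], -q[2], -q[3])
    w, x, y, z = qmul(qmul(q, (0.0, *v)), qconj)
    return (x, y, z)

qz = rotation_quaternion((0, 0, 1), math.pi / 2)  # 90 degrees about z
qx = rotation_quaternion((1, 0, 0), math.pi / 2)  # 90 degrees about x
combined = qmul(qx, qz)   # one product: apply qz first, then qx
```

Rotating (1, 0, 0) by `combined` sends it first to (0, 1, 0) and then up to (0, 0, 1), exactly as the two rotations done in sequence would.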

The article finishes with (as teased in the tweet above) a report of an electric toothbrush that should keep track of positions inside the user’s head, even as the head rotates. This is intriguing. I say as a person who’s reluctantly started using an electric toothbrush. I’m one of those who brushes, manually, too hard, to the point of damaging my gums. The electric toothbrush makes that harder to do. I’m not sure how an orientation-aware electric toothbrush will improve the situation any, but I’m open-minded.

[*] I went to graduate school at Rensselaer Polytechnic Institute, the “RPI of New York”. The school would be a rival to MIT if RPI had any self-esteem. I’m guessing, as I never went to a school that had self-esteem.

And What I’ve Been Reading

So here’s some stuff that I’ve been reading.

This one I saw through John Allen Paulos’s twitter feed. He points out that it’s like the Collatz conjecture but is, in fact, proven. If you try this yourself don’t make the mistake of giving up too soon. You might figure like this: start with 12. Sum the squares of its digits and you get 5, which is neither 1 nor anything in that 4-16-37-58-89-145-42-20 cycle. Not so! Square 5 and you get 25. Square those digits and add them and you get 29. Square those digits and add them and you get 85. And what comes next?
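A quick Python sketch, if you’d rather let the computer do the squaring and adding (names mine):

```python
def digit_square_sum(n):
    """Sum of the squares of the decimal digits of n."""
    return sum(int(d) ** 2 for d in str(n))

def happy_trajectory(n):
    """Iterate the digit-square-sum map until we hit 1 or repeat a value
    (repeating means we've fallen into the eight-number cycle)."""
    seen = []
    while n != 1 and n not in seen:
        seen.append(n)
        n = digit_square_sum(n)
    seen.append(n)
    return seen

print(happy_trajectory(12))
```

Starting from 12 the trajectory runs 5, 25, 29, 85, 89 — and 89 is in the cycle, so 12 never reaches 1.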

This is about a proof of Fermat’s Theorem on Sums of Two Squares. According to it, an odd prime number — let’s reach deep into the alphabet and call it p — can be written as the sum of two squares if and only if p is one more than a whole multiple of four. (The prime 2 is its own little special case: 1 + 1.) It’s a proof using fixed-point methods. This is a fun kind of proof, at least to my sense of fun. It’s an approach that’s got a clear physical interpretation. Imagine picking up a (thin) patch of bread dough, stretching it out some and maybe rotating it, and then dropping it back on the board. There’s at least one bit of dough that’s landed in the same spot it was before. Once you see this you will never be able to just roll out dough the same way. So here the proof involves setting up an operation on integers which has a fixed point, and showing that the fixed point makes the property true.
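The fixed-point argument itself doesn’t fit in a code snippet, but the theorem is easy to test. A Python sketch (a naive search, fine for small primes; names mine):

```python
from math import isqrt

def two_squares(p):
    """Find (a, b) with a*a + b*b == p, or None if no such pair exists."""
    for a in range(isqrt(p) + 1):
        b2 = p - a * a
        b = isqrt(b2)
        if b * b == b2:
            return (a, b)
    return None

# Primes one more than a multiple of four decompose; 3-mod-4 primes don't.
print(two_squares(13))   # 13 = 4 + 9
print(two_squares(7))    # no decomposition
```

Run it over a list of primes and the mod-4 pattern jumps right out.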

John D Cook, who runs a half-dozen or so mathematics-fact-of-the-day Twitter feeds, looks into calculating the volume of an egg. It involves calculus, as finding the volume of many interesting shapes does. I am surprised to learn the volume can be written out as a formula that depends on the shape of the egg. I would have bet that it couldn’t be expressed in “closed form”. This is a slightly flexible term. It’s meant to mean the thing can be written using only normal, familiar functions. However, we pretend that the inverse hyperbolic tangent is a “normal, familiar” function.

For example, there’s the surface area of an egg. This can be worked out too, again using calculus. It can’t be written even with the inverse hyperbolic cotangent, so good luck. You have to get into numerical integration if you want an answer humans can understand.
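If you’re curious what the numerical route looks like, here’s a Python sketch of Simpson’s-rule integration for a solid of revolution. The “egg” profile at the end is made up purely for illustration; it is not the formula from Cook’s essay:

```python
import math

def volume_of_revolution(profile, a, b, steps=10_000):
    """Volume of the solid made by rotating y = profile(x), a <= x <= b,
    around the x-axis: V = pi * integral of profile(x)^2 dx.
    Composite Simpson's rule; `steps` must be even."""
    h = (b - a) / steps
    total = profile(a) ** 2 + profile(b) ** 2
    for i in range(1, steps):
        weight = 4 if i % 2 == 1 else 2
        total += weight * profile(a + i * h) ** 2
    return math.pi * total * h / 3

# Sanity check against a shape with a known closed form: a unit sphere.
sphere = lambda x: math.sqrt(max(0.0, 1.0 - x * x))
print(volume_of_revolution(sphere, -1.0, 1.0))  # should be near 4*pi/3

# A hypothetical egg-ish profile, slightly fatter on one end:
egg = lambda x: math.sqrt(max(0.0, (1.0 - x * x) * (1.0 + 0.2 * x)))
print(volume_of_revolution(egg, -1.0, 1.0))
```

The same skeleton, with the arc-length integrand swapped in, is what you’d use for the surface area that resists closed form.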

My next mistake will be intentional, just to see how closely you are watching me.
Ashleigh Brilliant’s Pot-Shots rerun for the 15th of April, 2018. I understand people not liking Brilliant’s work but I love the embrace-the-doom attitude the strip presents.

Also, this doesn’t quite fit my Reading the Comics posts. But Ashleigh Brilliant’s Pot-Shots rerun for the 15th of April is something I’m going to use in future. I hope you find some use for it too.

Stuff I Should Read: about p-Adics

This is mostly a post for myself, so that I remember the existence of something I mean to read. I have tried downloading and putting into scattered files stuff I mean to read. I’ve also tried stuffing links of stuff I mean to read into Yojimbo. Maybe putting it here will at least let someone read the things.

Anyway, this is a short essay by Joel Abraham on arXiv, Introduction to the p-adic Space. p-adics are a method of thinking about what the real numbers are. Why we need ways to think about what the real numbers are turns up when you think carefully about where our idea of them comes from.

It’s easy to see where the counting numbers like ‘1’ and ‘2’ and ‘3’ come from. They’re part of our evolutionary heritage, the part of mathematics that we know is understood also by apes and crows and raccoons. We understand some of it before we even have language.

With some thinking, and many people helping, we can go from these counting numbers to the idea of ‘0’. And even into the negative counting numbers like ‘-4’. And by thinking about multiplication, and how to reverse multiplication, we get fractions. Rational numbers. Positive and negative, given the chance. But then what are the irrational numbers? We can work out easily there have to be irrational numbers. We can name some of them. But how to give a clear definition of the whole mass of them? It should be more than just “also the other numbers”.

The p-adic numbers are one of the ways to go about this. They start with thinking about what we mean for two numbers to be “close to” one another, and thinking hard about how to write numbers. This gets to interesting insights I don’t know as well as I’d like.
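The “close to” idea is concrete enough to compute. A Python sketch of the p-adic valuation and absolute value (names mine):

```python
from fractions import Fraction

def p_adic_valuation(x, p):
    """Net power of the prime p dividing the nonzero rational x."""
    x = Fraction(x)
    if x == 0:
        raise ValueError("the valuation of zero is infinite")
    v = 0
    num, den = x.numerator, x.denominator
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def p_adic_abs(x, p):
    """p-adic absolute value: |x|_p = p ** (-valuation)."""
    if Fraction(x) == 0:
        return Fraction(0)
    return Fraction(p) ** (-p_adic_valuation(x, p))

# 5-adically, 1000 is *small*: it's divisible by a high power of 5.
print(p_adic_abs(1000, 5))           # 1/125
print(p_adic_abs(Fraction(1, 25), 5))  # 25
```

Under this notion of size, two numbers are close when their difference is divisible by a big power of p, which is the insight the whole construction builds on.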

For this deficiency I blame Usenet. I first noticed p-adics in the voluminous and not particularly wise rantings of a crank poster to sci.math, back in the day. I forget what point, if any, he was trying to prove. But to first notice a subject as someone’s apparently idiosyncratic scheme of rewriting numbers so that everything we were already used to was useless, and in the service of some clearly nonsense goal (I think he was maybe trying to show how the number meant by 0.99999… was somehow different from the number meant by 1), is to badly hobble it. And I followed a strongly mathematical-physics course of study as an undergraduate and a graduate student. It’s easy to just miss problems of number representation. (This although p-adics could offer some advantages in numerical computing. They could make more numerically stable representations of irrational numbers.)

As I say, I want to fix that, and a friend linked to this arxiv post. And now that I’ve said stuff about it in public maybe it’ll coax me into going back and reading and understanding it all. We’ll see.

A Summer 2017 Mathematics A To Z Appendix: Are Colbert Numbers A Thing?

This is something I didn’t have space for in the proper A To Z sequence. And it’s more a question than exposition anyway. But what the heck: I like excuses to use my nice shiny art package.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

I was looking for mathematics topics I might write about if I didn’t get requests for particular letters. ‘C’ came up ‘cohomology’, but what if it hadn’t? I found an interesting-looking listing at MathWorld’s dictionary. The Colbert Numbers sounded interesting. This is a collection of very long prime numbers. Each of them has at least a million decimal digits. They relate to a conjecture by Wacław Sierpiński, who’s gone months without a mention around here.

The conjecture is about whole numbers that are equal to k \cdot 2^n + 1 for some whole numbers ‘k’ and ‘n’. Are there choices of ‘k’ for which, no matter what positive whole number ‘n’ you pick, k \cdot 2^n + 1 is never a prime number? (‘k’ has to meet some extra conditions.) I’m not going to explain why this is interesting because I don’t know. It’s a number theory question. They’re all strange and interesting questions in their ways. If I were writing an essay about Colbert Numbers I’d have figured that out.

Thing is, we believe we know what the smallest possible ‘k’ is. We think that the smallest possible Sierpiński number is 78,557. We don’t have this quite proved, though. There was a set of seventeen candidates smaller than 78,557 that might themselves be Sierpiński numbers. If those candidates could be ruled out then we’d have proved 78,557 was it. That’s easy to imagine: find, for each of them, a number ‘n’ so that the candidate times 2^n plus one is a prime number. But finding big prime numbers is hard. This turned into a distributed-computing search, which would evaluate these huge numbers and find whether they were prime numbers. (The project, “Seventeen Or Bust”, was destroyed by computer failure in 2016. Attempts to verify the work done, and continue it, are underway.) There are six possible candidates left.
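You can at least watch the argument for 78,557 work, in Python. The set {3, 5, 7, 13, 19, 37, 73} is the standard published covering set for it, not something derived here; the point is that every exponent ‘n’ lands on at least one of those divisors, so 78,557 times 2^n plus one is always composite:

```python
def has_covering_divisor(k, n, small_primes):
    """True when some prime in small_primes divides k * 2**n + 1."""
    value = k * 2**n + 1
    return any(value % p == 0 for p in small_primes)

# The classic covering set reported for 78,557 (a known result):
covering = [3, 5, 7, 13, 19, 37, 73]

# Check a healthy stretch of exponents; the covering argument itself
# works for every n, which is what makes 78,557 a Sierpinski number.
assert all(has_covering_divisor(78557, n, covering) for n in range(1, 2000))
```

The hard direction is the opposite one: showing some smaller candidate has no covering set and no prime, ever, which is why the distributed search exists.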

MathWorld says that the seventeen cases that had to be checked were named Colbert Numbers. This was in honor of Stephen T Colbert, the screamingly brilliant character host of The Colbert Report. (Ask me sometime about the Watership Down anecdote.) It’s a plausible enough claim. Part of Stephen T Colbert’s persona was demanding things be named for him. And he’d take appropriate delight in having minor but interesting things named for him. Treadmills on the space station. Minor-league hockey team mascots. A class of numbers for proving a 60-year-old mathematical conjecture is exactly the sort of thing that would get his name and attention.

But here’s my problem. Who named them Colbert Numbers? MathWorld doesn’t say. Wikipedia doesn’t say. The best I can find with search engines doesn’t say. When were they named Colbert Numbers? Again, no answers. Poking around fan sites for The Colbert Report — where you’d expect the naming of stuff in his honor to be mentioned — doesn’t turn up anything. Does anyone call them Colbert Numbers? I mean outside people who’ve skimmed MathWorld’s glossary for topics with interesting names?

I don’t mean to sound overly skeptical here. But, like, I know there’s a class of science fiction fans who like to explain how telerobotics engineers name their hardware “waldoes”. This is in honor of a character in a Robert Heinlein story I never read either. I’d accepted that without much interest until Google’s US Patent search became a thing. One afternoon I noticed that if telerobotics engineers do call their hardware “waldoes” they never use the term in patent applications. Is it possible that someone might have slipped a joke into Wikipedia or something and had it taken seriously? Certainly. What amounts to a Wikipedia prank briefly gave the coati — an obscure-to-the-United-States animal that I like — the nickname of “Brazilian aardvark”. There are surely other instances of Wikipedia-generated pranks becoming “real” things.

So I would like to know. Who named Colbert Numbers that, and when, and were they — as seems obvious, but you never know — named for Stephen T Colbert? Or is this an example of Wikiality, the sense that reality can be whatever enough people decide to believe is true, as described by … well, Stephen T Colbert?

The Summer 2017 Mathematics A To Z: Sárközy’s Theorem

Gaurish, of For the love of Mathematics, gives me another chance to talk number theory today. Let’s see how that turns out.


Sárközy’s Theorem.

I have two pieces to assemble for this. One is in factors. We can take any counting number, a positive whole number, and write it as the product of prime numbers. 2038 is equal to the prime 2 times the prime 1019. 4312 is equal to 2 raised to the third power times 7 raised to the second times 11. 1040 is 2 to the fourth power times 5 times 13. 455 is 5 times 7 times 13.

There are many ways to divide up numbers like this. Here’s one. Is there a square number among its factors? 2038 and 455 don’t have any. They’re each a product of prime numbers that are never repeated. 1040 has a square among its factors. 2 times 2 divides into 1040. 4312, similarly, has a square: we can write it as 2 squared times 2 times 7 squared times 11. So that is my first piece. We can divide counting numbers into squarefree and not-squarefree.
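A Python sketch of the squarefree test, by way of plain trial-division factoring (names mine):

```python
def prime_factors(n):
    """Prime factorization of n as a list of (prime, exponent) pairs."""
    factors = []
    d = 2
    while d * d <= n:
        if n % d == 0:
            e = 0
            while n % d == 0:
                n //= d
                e += 1
            factors.append((d, e))
        else:
            d += 1
    if n > 1:
        factors.append((n, 1))
    return factors

def is_squarefree(n):
    """True when no prime appears more than once in n's factorization."""
    return all(e == 1 for _, e in prime_factors(n))

print(prime_factors(4312))                      # 2^3 * 7^2 * 11
print(is_squarefree(2038), is_squarefree(1040))
```

It confirms the examples above: 2038 and 455 are squarefree, 1040 and 4312 are not.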

The other piece is in binomial coefficients. These are numbers, often quite big numbers, that get dumped on the high school algebra student as she tries to work with some expression like (a + b)^n . They’re also dumped on the poor student in calculus, as something about Newton’s binomial coefficient theorem. Which we hear is something really important. In my experience it wasn’t explained why this should rank up there with, like, the differential calculus. (Spoiler: it’s because of polynomials.) But it’s got some great stuff to it.

Binomial coefficients are among those utility players in mathematics. They turn up in weird places. In dealing with polynomials, of course. They also turn up in combinatorics, and through that, probability. If you run, for example, 10 experiments each of which could succeed or fail, the chance you’ll get exactly five successes is going to be proportional to one of these binomial coefficients. That they touch on polynomials and probability is a sign we’re looking at a thing woven into the whole universe of mathematics. We saw them some in talking, last A-To-Z around, about Yang Hui’s Triangle. That’s also known as Pascal’s Triangle. It has more names too, since it’s been found many times over.

The theorem under discussion is about central binomial coefficients. These are one specific coefficient in a row. The ones that appear, in the triangle, along the line of symmetry. They’re easy to describe in formulas. For a whole number ‘n’ that’s greater than or equal to zero, evaluate what we call 2n choose n:

{{2n} \choose{n}} =  \frac{(2n)!}{(n!)^2}

If ‘n’ is zero, this number is \frac{0!}{(0!)^2} or 1. If ‘n’ is 1, this number is \frac{2!}{(1!)^2} or 2. If ‘n’ is 2, this number is \frac{4!}{(2!)^2} or 6. If ‘n’ is 3, this number is (sparing the formula) 20. The numbers keep growing. 70, 252, 924, 3432, 12870, and so on.

So. 1 and 2 and 6 are squarefree numbers. Not much arguing that. But 20? That’s 2 squared times 5. 70? 2 times 5 times 7. 252? 2 squared times 3 squared times 7. 924? That’s 2 squared times 3 times 7 times 11. 3432? 2 cubed times 3 times 11 times 13; there’s a 2 squared in there. 12870? 2 times 3 squared times it doesn’t matter anymore. It’s not a squarefree number.

There’s a bunch of not-squarefree numbers in there. The question: do we ever stop seeing squarefree numbers here?
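Python can answer this for small ‘n’ at least. A self-contained sketch (names mine):

```python
from math import comb

def is_squarefree(n):
    """Quick squarefree test: no square of a prime may divide n."""
    d = 2
    while d * d <= n:
        if n % (d * d) == 0:
            return False
        while n % d == 0:
            n //= d
        d += 1
    return True

central = [comb(2 * n, n) for n in range(10)]
print(central)    # 1, 2, 6, 20, 70, 252, ...
print([is_squarefree(c) for c in central])
```

After n = 4 (the coefficient 70) every flag the loop prints is False, which is exactly the pattern the theorem explains.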

So here’s Sárközy’s Theorem. It says that this central binomial coefficient {{2n} \choose{n}} is never squarefree as long as ‘n’ is big enough. András Sárközy showed in 1985 that this was true. How big is big enough? … We have a bound, at least, for this theorem. If ‘n’ is larger than the number 2^{8000} then the corresponding coefficient can’t be squarefree. It might not surprise you that the formulas involved here feature the Riemann Zeta function. That always seems to turn up for questions about large prime numbers.

That’s a common state of affairs for number theory problems. Very often we can show that something is true for big enough numbers. I’m not sure there’s a clear reason why. When numbers get large enough it can be more convenient to deal with their logarithms, I suppose. And those look more like the real numbers than the integers. And real numbers are typically easier to prove stuff about. Maybe that’s it. This is vague, yes. But to ask ‘why’ some things are easy and some are hard to prove is a hard question. What is a satisfying ’cause’ here?

It’s tempting to say that since we know this is true for all ‘n’ above a bound, we’re done. We can just test all the numbers below that bound, and the rest is done. You can do a satisfying proof this way: show that eventually the statement is true, and show all the special little cases before it is. This particular result is kind of useless, though. 2^{8000} is a number that’s something like 2,400 digits long. For comparison, the total number of things in the universe is something like a number about 80 digits long. Certainly not more than 90. It’d take too long to test all those cases.

That’s all right. Since Sárközy’s proof in 1985 there’ve been other breakthroughs. In 1988 P Goetgheluck proved it was true for a big range of numbers: every ‘n’ that’s larger than 4 and less than 2^{42,205,184} . That’s a number something more than 12 million digits long. In 1991 I Vardi proved we had no squarefree central binomial coefficients for ‘n’ greater than 4 and less than 2^{774,840,978} , which is a number about 233 million digits long. And then in 1996 Andrew Granville and Olivier Ramare showed directly that this was so for all ‘n’ larger than 4.

So that 70 that turned up just a few lines in is the last squarefree one of these coefficients.

Is this surprising? Maybe, maybe not. I’ll bet most of you didn’t have an opinion on this topic twenty minutes ago. Let me share something that did surprise me, and continues to surprise me. In 1974 David Singmaster proved that any integer divides almost all the binomial coefficients out there. “Almost all” is here a term of art, but it means just about what you’d expect. Imagine the giant list of all the numbers that can be binomial coefficients. Then pick any positive integer you like. The number you picked will divide into so many of the giant list that the exceptions won’t be noticeable. So that square numbers like 4 and 9 and 16 and 25 should divide into most binomial coefficients? … That’s to be expected, suddenly. Into the central binomial coefficients? That’s not so obvious to me. But then so much of number theory is strange and surprising and not so obvious.

The Summer 2017 Mathematics A To Z: Prime Number

Gaurish, host of, For the love of Mathematics, gives me another topic for today’s A To Z entry. I think the subject got away from me. But I also like where it got.


Prime Number.

There’s something about ‘5’ that you only notice when you’re a kid first learning about numbers. You know that it’s a prime number because it’s equal to 1 times 5 and nothing else. You also know that once you introduce fractions, it’s equal to all kinds of things. It’s 10 times one-half and it’s 15 times one-third and it’s 2.5 times 2 and many other things. Why, you might ask the teacher, is it a prime number if it’s got a million billion trillion different factors? And when every other whole number has as many factors? If you get to the real numbers it’s even worse yet, although when you’re a kid you probably don’t realize that. If you ask, the teacher probably answers that it’s only the whole numbers that count for saying whether something is prime or not. And, like, 2.5 can’t be considered anything, prime or composite. This satisfies the immediate question. It doesn’t quite get at the underlying one, though. Why do integers have prime numbers while real numbers don’t?

To maybe have a prime number we need a ring. This is a creature of abstract algebra, the thing we come to call “algebra” once we get to college. A ring consists of a set of elements, and a rule for adding them together, and a rule for multiplying them together. And I want this ring to have a multiplicative identity. That’s some number which works like ‘1’: take something, multiply it by that, and you get that something back again. Also, I want this multiplication rule to commute. That is, the order of multiplication doesn’t affect what the result is. (If the order matters then everything gets too complicated to deal with.) Let me say the things in the set are numbers. It turns out (spoiler!) they don’t have to be. But that’s how we start out.

Whether the numbers in a ring are prime or not depends on the multiplication rule. Let’s take a candidate number that I’ll call ‘a’ to make my writing easier. If the only numbers whose product is ‘a’ are the pair of ‘a’ and the multiplicative identity, then ‘a’ is prime. If there’s some other pair of numbers that give you ‘a’, then ‘a’ is not prime.

The integers — the positive and negative whole numbers, including zero — are a ring. And they have prime numbers just like you’d expect, if we figure out some rule about how to deal with the number ‘-1’. There are many other rings. There’s a whole family of rings, in fact, so commonly used that they have shorthand. Mathematicians write them as “Zn”, where ‘n’ is some whole number. They’re the integers, modulo ‘n’. That is, they’re the whole numbers from ‘0’ up to the number ‘n-1’, whatever that is. Addition and multiplication work as they do with normal arithmetic, except that if the result is less than ‘0’ we add ‘n’ to it. If the result is more than ‘n-1’ we subtract ‘n’ from it. We repeat that until the result is something from ‘0’ to ‘n-1’, inclusive.

(We use the letter ‘Z’ because it’s from the German word for numbers, and a lot of foundational work was done by German-speaking mathematicians. Alternatively, we might write this set as “In”, where “I” stands for integers. If that doesn’t satisfy, we might write this set as “Jn”, where “J” stands for integers. This is because it’s only very recently that we’ve come to see “I” and “J” as different letters rather than different ways to write the same letter.)

These modulo arithmetics are legitimate ones, good reliable rings. They make us realize how strange prime numbers are, though. Consider the set Z4, where the only numbers are 0, 1, 2, and 3. 0 times anything is 0. 1 times anything is whatever you started with. 2 times 1 is 2. Obvious. 2 times 2 is … 0. All right. 2 times 3 is 2 again. 3 times 1 is 3. 3 times 2 is 2. 3 times 3 is 1. … So that’s a little weird. The only product that gives us 3 is 3 times 1. So 3’s a prime number here. 2 isn’t a prime number: 2 times 3 is 2. For that matter even 1 is a composite number, an unsettling consequence.

Or then Z5, where the only numbers are 0, 1, 2, 3, and 4. Here, there are no prime numbers. Each number is the product of at least one pair of other numbers. In Z6 we start to have prime numbers again. But Z7? Z8? I recommend these questions to a night when your mind is too busy to let you fall asleep.
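These little multiplication tables are easy to explore in code, if the night gets long. A Python sketch of the naive prime test described above (professionals would handle units more carefully; names mine):

```python
def products_giving(target, n):
    """All ordered pairs (a, b) in Z_n with a*b = target (mod n)."""
    return [(a, b) for a in range(n) for b in range(n)
            if (a * b) % n == target]

def naive_primes(n):
    """Elements of Z_n (from 2 up) whose only factorizations involve 1 —
    the informal test from the essay, applied mechanically."""
    primes = []
    for target in range(2, n):
        pairs = products_giving(target, n)
        if all(1 in (a, b) for a, b in pairs):
            primes.append(target)
    return primes

print(naive_primes(4))   # 3 is the lone prime in Z_4
print(naive_primes(5))   # no primes at all in Z_5
print(naive_primes(6))
```

Running it for Z_7 and Z_8 answers the insomnia questions, though working them out by hand is more fun.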

Prime numbers depend on context. In the crowded universe of all the rational numbers, or all the real numbers, nothing is prime. In the more austere world of the Gaussian Integers, familiar friends like ‘3’ are prime again, although ‘5’ no longer is. We recognize that as the product of 2 + \imath and 2 - \imath , themselves now prime numbers.

So, given that these things do depend on context, should we care? Or let me put it another way. Suppose we contact a wholly separate culture, one that we can’t have influenced and one not influenced by us. It’s plausible that they should have a mathematics. Would they notice prime numbers as something worth study? Or would they notice them the way we notice, say, pentagonal numbers, a thing that allows for some pretty patterns and that’s about it?

Well, anything could happen, of course. I’m inclined to think that prime numbers would be noticed, though. They seem to follow naturally from pondering arithmetic. And if one has thought of rings, then prime numbers seem to stand out. The way that Zn behaves changes in important ways if ‘n’ is a prime number. Most notably, if ‘n’ is prime (among the whole numbers), then we can define something that works like division on Zn. If ‘n’ isn’t prime (again), we can’t. This stands out. There are a host of other intriguing results that all seem to depend on whether ‘n’ is a prime number among the whole numbers. It seems hard to believe someone could think of the whole numbers and not notice the prime numbers among them.

And they do stand out, as these reliably peculiar things. Many things about them (in the whole numbers) are easy to prove. That there are infinitely many, for example, you can prove to a child. And there are many things we have no idea how to prove. That there are infinitely many primes which are exactly two more than another prime, for example. Any child can understand the question. The one who can prove it will win what fame mathematicians enjoy. If it can be proved.

They turn up in strange, surprising places. Just in the whole numbers we find some patches where there are many prime numbers in a row (forty percent of the numbers 1 through 10 are prime!). We can find deserts; we know of a stretch of 1,113,106 numbers in a row without a single prime among them. We know it’s possible to find prime deserts as vast as we want. Say you want a gap between primes of at least size N. Then look at the numbers (N+1)! + 2, (N+1)! + 3, (N+1)! + 4, and so on, up to (N+1)! + N+1. None of those can be prime numbers. You must have a gap at least the size N. It may be larger; how do we know that (N+1)! + 1 is a prime number?

No telling. Well, we can check. See if any prime number divides into (N+1)! + 1. This takes a long time to do if N is all that big. There are no formulas we know of that will make this easy or quick.
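The desert construction is easy to verify directly, for small N anyway. A Python sketch:

```python
from math import factorial

def composite_run(n):
    """The essay's construction: (N+1)! + 2 through (N+1)! + N + 1 are
    N consecutive composite numbers, since (N+1)! + k is divisible by k."""
    base = factorial(n + 1)
    return [base + k for k in range(2, n + 2)]

def smallest_factor(m):
    """Smallest prime factor of m (equals m exactly when m is prime)."""
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m

run = composite_run(10)   # ten consecutive composites starting at 11! + 2
assert all(smallest_factor(m) < m for m in run)
```

Each number in the run has a small, predictable factor, which is the whole trick: we never had to hunt for the factors at all.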

We don’t call it a “prime number” if it’s in a ring that isn’t enough like the numbers. Fair enough. We shift the name to “prime element”. “Element” is a good generic name for a thing whose identity we don’t mean to pin down too closely. I’ve talked about the Gaussian Primes already, in an earlier essay and earlier in this essay. We can make a ring out of the polynomials whose coefficients are all integers. In that, x^2 + 1 is a prime. So is x^2 - 2 . If this hasn’t given you some ideas what other polynomials might be primes, then you have something else to ponder while trying to sleep. Listing all the prime polynomials is likely more than you can do, though.

Prime numbers seem to stand out, obvious and important. Humans have known about prime numbers for as long as we’ve known about multiplication. And yet there is something obscure about them. If there are cultures completely independent of our own, do they have insights which make prime numbers not such occult figures? How different would the world be if we knew all the things we now wonder about primes?

The Summer 2017 Mathematics A To Z: Elliptic Curves

Gaurish, of For The Love Of Mathematics, gives me another subject today. It’s one that isn’t about ellipses. Sad to say it’s also not about elliptic integrals. This is sad to me because I have a cute little anecdote about a time I accidentally gave my class an impossible problem. I did apologize. No, nobody solved it anyway.


Elliptic Curves.

Elliptic Curves start, of course, with polynomials. Particularly, they’re polynomials with two variables. We call them ‘x’ and ‘y’ because we have no reason to be difficult. They’re of at most third degree. That is, we can have terms like ‘x’ and ‘y^2’ and ‘x^2 y’ and ‘y^3’. Something with higher powers, like ‘x^4’ or ‘x^2 y^2’ — a fourth power, all together — is right out. Doesn’t matter. Start from this and we can do some slick changes of variables so that we can rewrite it to look like this:

y^2 = x^3 + Ax + B

Here, ‘A’ and ‘B’ are some numbers that don’t change for this particular curve. Also, we need it to be true that 4A^3 + 27B^2 doesn’t equal zero. It avoids problems. What we’ll be looking at are coordinates, values of ‘x’ and ‘y’ together which make this equation true. That is, it’s points on the curve. If you pick some real numbers ‘A’ and ‘B’ and draw all the values of ‘x’ and ‘y’ that make the equation true you get … well, there’s different shapes. They all look like those microscope photos of a water drop emerging and falling from a tap, only rotated clockwise ninety degrees.

So. Pick any of these curves that you like. Pick a point. I’m going to name your point ‘P’. Now pick a point once more. I’m going to name that point ‘Q’. Now draw a line from P through Q. Keep drawing it. It’ll cross the original elliptic curve again. And that point is … not actually special. What is special is the reflection of that point. That is, the same x-coordinate, but flip the plus or minus sign for the y-coordinate. (WARNING! Do not call it “the reflection” at your thesis defense! Call it the “conjugate” point. It means “reflection”.) Your elliptic curve will be symmetric around the x-axis. If, say, the point with x-coordinate 4 and y-coordinate 3 is on the curve, so is the point with x-coordinate 4 and y-coordinate -3. So that reflected point is … something special.

Kind of a curved-out less-than-sign shape.
y^2 = x^3 - 1 . The water drop bulges out from the surface.

This lets us do something wonderful. We can think of this reflected point as the sum of your ‘P’ and ‘Q’. You can ‘add’ any two points on the curve and get a third point. This means we can do something that looks like addition for points on the elliptic curve. And this means the points on this curve are a group, and we can bring all our group-theory knowledge to studying them. It’s a commutative group, too; ‘P’ added to ‘Q’ leads to the same point as ‘Q’ added to ‘P’.

Let me head off some clever thoughts that make fair objections. What if ‘P’ and ‘Q’ are already reflections, so the line between them is vertical? That never touches the original elliptic curve again, right? Yeah, fair complaint. We patch this by saying that there’s one more point, ‘O’, that’s off “at infinity”. Where is infinity? It’s wherever your vertical lines end. Shut up, this can too be made rigorous. In any case it’s a common hack for this sort of problem. When we add that, everything’s nice. The ‘O’ serves the role in this group that zero serves in arithmetic: the sum of point ‘O’ and any point ‘P’ is going to be ‘P’ again.

Second clever thought to head off: what if ‘P’ and ‘Q’ are the same point? There are infinitely many lines that go through a single point, so how do we pick one to find an intersection with the elliptic curve? Fair question. In that case we pick the tangent line to the elliptic curve at ‘P’, and carry on as before.
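The chord-and-tangent rule is easy to play with in code. Here is a minimal Python sketch of my own, using exact rational arithmetic; the curve y^2 = x^3 + 1 and the point names are just illustrative choices, and None plays the part of ‘O’, the point at infinity.

```python
from fractions import Fraction

# A minimal sketch of chord-and-tangent addition on the curve
# y^2 = x^3 + A*x + B. Here A = 0, B = 1; these are my choices.
def ec_add(P, Q, A):
    if P is None:
        return Q
    if Q is None:
        return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and y1 == -y2:
        return None                         # vertical line: the sum is 'O'
    if P == Q:
        s = (3 * x1 * x1 + A) / (2 * y1)    # tangent slope
    else:
        s = (y2 - y1) / (x2 - x1)           # chord slope
    x3 = s * s - x1 - x2
    y3 = s * (x1 - x3) - y1                 # the reflected ("conjugate") point
    return (x3, y3)

F = Fraction
P, Q = (F(0), F(1)), (F(2), F(3))           # both on y^2 = x^3 + 1
print(ec_add(P, Q, F(0)))                   # the point (-1, 0)
print(ec_add(P, Q, F(0)) == ec_add(Q, P, F(0)))  # True: commutative
```

Running it, the line through (0, 1) and (2, 3) meets the curve a third time, and the reflected point is (-1, 0), which sits on the curve since 0 = (-1)^3 + 1.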

The curved-out less-than-sign shape has a noticeable c-shaped bulge on the end.
y^2 = x^3 + 1 . The water drop is close to breaking off, but surface tension has not yet pinched off the falling form.

There’s more. What kind of number is ‘x’? Or ‘y’? I’ll bet that you figured they were real numbers. You know, ordinary stuff. I didn’t say what they were, so I left it to our instinct, and that usually runs toward real numbers. Those are what I meant, yes. But we didn’t have to. ‘x’ and ‘y’ could be in other sets of numbers too. They could be complex-valued numbers. They could be just the rational numbers. They could even be part of a finite collection of possible numbers. As long as the equation y^2 = x^3 + Ax + B means something (and some technical points are met) we can carry on. The elliptic curves, and the points we “add” on them, might not look like the curves we started with anymore. They might not look like anything recognizable anymore. But the logic continues to hold. We still create these groups out of the points where lines intersect a curve.

By now you probably admit this is neat stuff. You may also think: so what? We can take this thing you never thought about, draw points and lines on it, and make it look very loosely kind of like just adding numbers together. Why is this interesting? No appreciation just for the beauty of the structure involved? Well, we live in a fallen world.

It comes back to number theory. The modern study of Diophantine equations grows out of studying elliptic curves on the rational numbers. It turns out the group of points you get for that looks like a finite collection of points with some collection of integers hanging on. How long that collection of integers is is called the ‘rank’, and there are deep mysteries at work. We know there are elliptic curves that have a rank as big as 28. Nobody knows if the rank can be arbitrarily high, though. And I believe we don’t even know if there are any curves with rank of, like, 27, or 25.

Yeah, I’m still sensing skepticism out there. Fine. We’ll go back to the only part of number theory everybody agrees is useful. Encryption. We have roughly the same goals for every encryption scheme. We want it to be easy to encode a message. We want it to be easy to decode the message if you have the key. We want it to be hard to decode the message if you don’t have the key.

The curved-out sign has a bulge with convex loops to it, so that it resembles the cut of a jigsaw puzzle piece.
y^2 = 3x^2 - 3x + 3 . The water drop is almost large enough that its weight overcomes the surface tension holding it to the main body of water.

Take something inside one of these elliptic curve groups. Especially one that’s got a finite field. Let me call your thing ‘g’. It’s really easy for you, knowing what ‘g’ is and what your field is, to raise it to a power. You can pretty well impress me by sharing the value of ‘g’ raised to some whole number ‘m’. Call that ‘h’.

Why am I impressed? Because if all I know are ‘g’ and ‘h’, I have a heck of a time figuring out what ‘m’ is. Especially on these finite field groups there’s no obvious connection between how big ‘h’ is and how big ‘g’ is and how big ‘m’ is. Start with a big enough finite field and you can encode messages in ways that are crazy hard to crack.
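For the skeptical, here is a toy Python sketch of the asymmetry. The prime, the curve, and the base point are throwaway choices of mine, nothing standard; real systems use fields hundreds of bits wide. Computing m·g takes a handful of doublings, while recovering m means trudging through multiples one at a time.

```python
# Toy elliptic-curve arithmetic over a tiny finite field; every
# constant here is an illustrative choice of mine.
P_MOD = 97                    # a tiny prime field, illustration only
A = 2                         # curve: y^2 = x^3 + 2x + 3 (mod 97)

def add(P, Q):
    if P is None: return Q
    if Q is None: return P
    x1, y1 = P
    x2, y2 = Q
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return None           # the point at infinity
    if P == Q:
        s = (3 * x1 * x1 + A) * pow(2 * y1, -1, P_MOD)
    else:
        s = (y2 - y1) * pow(x2 - x1, -1, P_MOD)
    x3 = (s * s - x1 - x2) % P_MOD
    return (x3, (s * (x1 - x3) - y1) % P_MOD)

def times(m, g):              # double-and-add: the easy direction
    acc = None
    while m:
        if m & 1:
            acc = add(acc, g)
        g = add(g, g)
        m >>= 1
    return acc

g = (3, 6)                    # on the curve: 6^2 = 3^3 + 2*3 + 3 (mod 97)
h = times(20, g)

m, pt = 1, g                  # the hard direction: step until we hit h
while pt != h:
    pt = add(pt, g)
    m += 1
print(m)                      # the recovered exponent
```

At this size the brute-force loop finishes instantly, of course; the point is only that one direction is a few doublings and the other is a march through every multiple, and the march grows hopeless as the field does.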

We trust. At least, if there are any ways to break the code quickly, nobody’s shared them. And there’s one of those enormous-money-prize awards waiting for someone who does know how to break such a code quickly. (I don’t know which. I’m going by what I expect from people.)

And then there’s fame. These were used to prove Fermat’s Last Theorem. Suppose there are some non-boring numbers ‘a’, ‘b’, and ‘c’, so that for some prime number ‘p’ that’s five or larger, it’s true that a^p + b^p = c^p . (We can separately prove Fermat’s Last Theorem for the powers 3 and 4, and every other power reduces to one of those or to a prime that’s five or larger.) Then this implies properties about the elliptic curve:

y^2 = x(x - a^p)(x + b^p)

This is a convenient way of writing things since it showcases the a^p and b^p. It’s equal to:

y^2 = x^3 + \left(b^p - a^p\right)x^2 - a^p b^p x

(I was so tempted to leave an arithmetic error in there so I could make sure someone commented.)
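If you’d rather let a machine referee the algebra, a quick numeric spot check (sample values mine) confirms the expansion, minus sign on the linear term and all:

```python
# Spot check that x*(x - a**p)*(x + b**p) matches
# x**3 + (b**p - a**p)*x**2 - a**p * b**p * x
# for a scattering of sample values of my choosing.
def factored(x, a, b, p):
    return x * (x - a**p) * (x + b**p)

def expanded(x, a, b, p):
    return x**3 + (b**p - a**p) * x**2 - a**p * b**p * x

for x in range(-5, 6):
    for a in (1, 2, 3):
        for b in (1, 2, 4):
            assert factored(x, a, b, 5) == expanded(x, a, b, 5)
print("expansion verified")
```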

A little ball off to the side of a curved-out less-than-sign shape.
y^2 = 3x^3 - 4x . The water drop has broken off, and the remaining surface rebounds to its normal meniscus.

If there’s a solution to Fermat’s Last Theorem, then this elliptic equation can’t be modular. I don’t have enough words to explain what ‘modular’ means here. Andrew Wiles and Richard Taylor showed that the equation was modular. So there is no solution to Fermat’s Last Theorem except the boring ones. (Like, where ‘b’ is zero and ‘a’ and ‘c’ equal each other.) And it all comes from looking close at these neat curves, none of which looks like an ellipse.

They’re named elliptic curves because we first noticed them when Carl Jacobi — yes, that Carl Jacobi — was studying the length of arcs of an ellipse. That’s interesting enough on its own. But it is hard. Maybe I could have fit in that anecdote about giving my class an impossible problem after all.

The Summer 2017 Mathematics A To Z: Diophantine Equations

I have another request from Gaurish, of the For The Love Of Mathematics blog, today. It’s another change of pace.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Diophantine Equations

A Diophantine equation is a polynomial. Well, of course it is. It’s an equation, or a set of equations, setting one polynomial equal to another. Possibly equal to a constant. What makes this different from “any old equation” is the coefficients. These are the constant numbers that you multiply the variables, your x and y and x^2 and z^8 and so on, by. To make a Diophantine equation all these coefficients have to be integers. You know one well, because it’s that x^n + y^n = z^n thing that Fermat’s Last Theorem is all about. And you’ve probably seen ax + by = 1 . It turns up a lot because that’s a line, and we do a lot of stuff with lines.

Diophantine equations are interesting. There are a couple of cases that are easy to solve. I mean, at least that we can find solutions for. ax + by = 1 , for example, that’s easy to solve. x^n + y^n = z^n it turns out we can’t solve. Well, we can if n is equal to 1 or 2. Or if x or y or z are zero. These are obvious, that is, they’re quite boring. That one took about four hundred years to solve, and the solution was “there aren’t any solutions”. This may convince you of how interesting these problems are. What, from looking at it, tells you that ax + by = 1 is simple while x^n + y^n = z^n is (most of the time) impossible?
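The reason ax + by = 1 counts as easy: the extended Euclidean algorithm constructs a solution whenever a and b share no factor. A short sketch, with sample numbers of my choosing:

```python
# Extended Euclidean algorithm: for any a and b it returns
# (g, x, y) with a*x + b*y == g == gcd(a, b). When g is 1, that
# x and y solve a*x + b*y = 1.
def ext_gcd(a, b):
    if b == 0:
        return (a, 1, 0)
    g, x, y = ext_gcd(b, a % b)
    return (g, y, x - (a // b) * y)

g, x, y = ext_gcd(34, 55)
print(g, x, y)                      # g is 1, so 34*x + 55*y comes to 1
assert 34 * x + 55 * y == g == 1
```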

I don’t know. Nobody really does. There are many kinds of Diophantine equation, all different-looking polynomials. Some of them are special one-off cases, like x^n + y^n = z^n . For example, there’s x^4 + y^4 + z^4 = w^4 for some integers x, y, z, and w. Leonhard Euler conjectured this equation had only boring solutions. You’ll remember Euler. He wrote the foundational work for every field of mathematics. It turns out he was wrong. It has infinitely many interesting solutions. The first one found was 2,682,440^4 + 15,365,639^4 + 18,796,760^4 = 20,615,673^4 , and it took a computer search to find. We can forgive Euler not noticing it.

Some are groups of equations that have similar shapes. There’s the Fermat’s Last Theorem formula, for example, which is a different equation for every different integer n. Then there’s what we call Pell’s Equation. This one is x^2 - D y^2 = 1 (or equals -1), for some counting number D. It’s named for the English mathematician John Pell, who did not discover the equation (even in the Western European tradition; Indian mathematicians were familiar with it for a millennium), did not solve the equation, and did not do anything particularly noteworthy in advancing human understanding of the solution. Pell owes his fame in this regard to Leonhard Euler, who misunderstood Pell’s revising a translation of a book discussing a solution for Pell’s authoring a solution. I confess Euler isn’t looking very good on Diophantine equations.
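For small D you can even hunt for Pell solutions by brute force. This little search is my own demonstration; serious work uses the continued fraction expansion of the square root of D, since the smallest solutions can be enormous.

```python
from math import isqrt

# Brute-force Pell: walk y upward until D*y^2 + 1 is a perfect
# square. Sane only for small D; solutions grow explosively.
def pell(D):
    y = 1
    while True:
        x2 = D * y * y + 1
        x = isqrt(x2)
        if x * x == x2:
            return x, y
        y += 1

print(pell(2))    # (3, 2): 9 - 2*4 = 1
print(pell(7))    # (8, 3): 64 - 7*9 = 1
```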

But nobody looks very good on Diophantine equations. Make up a Diophantine equation of your own. Use whatever whole numbers, positive or negative, that you like for your equation. Use whatever powers of however many variables you like for your equation. So you get something that looks maybe like this:

7x^2 - 20y + 18y^2 - 38z = 9

Does it have any solutions? I don’t know. Nobody does. There isn’t a general all-around solution; indeed, Yuri Matiyasevich proved in 1970 that no algorithm can exist which decides whether an arbitrary Diophantine equation has solutions. You know how with a quadratic equation we have this formula where you recite some incantation about “b squared minus four a c” and get any roots that exist? Nothing like that exists for Diophantine equations in general. Specific ones, yes. But they’re all specialties, crafted to fit the equation that has just that shape.

So for each equation we have to ask: is there a solution? Is there any solution that isn’t obvious? Are there finitely many solutions? Are there infinitely many? Either way, can we find all the solutions? And we have to answer them anew for each kind of equation: what answers it has, whether answers are known to exist, whether answers can exist at all. Knowing answers for one kind doesn’t help us for any others, except as inspiration. If some trick worked before, maybe it will work this time.

There are a couple usually reliable tricks. Can the equation be rewritten in some way that it becomes the equation for a line? If it can we probably have a good handle on any solutions. Can we apply modulo arithmetic to the equation? If we can, we might be able to reduce the number of possible solutions that the equation has. In particular we might be able to reduce the number of possible solutions until we can just check every case. Can we use induction? That is, can we show there’s some parameter for the equations, and that knowing the solutions for one value of that parameter implies knowing solutions for larger values? And then find some small enough value we can test it out by hand? Or can we show that if there is a solution, then there must be a smaller solution, and smaller yet, until we can either find an answer or show there aren’t any? Sometimes. Not always. The field blends seamlessly into number theory. And number theory is all sorts of problems easy to pose and hard or impossible to solve.
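The modulo trick deserves a tiny demonstration. Squares can only be 0 or 1 modulo 4, so a sum of two squares can never be 3 more than a multiple of 4; my example number below is arbitrary.

```python
# Squares modulo 4 land on {0, 1}, so x^2 + y^2 modulo 4 lands on
# {0, 1, 2}. Any target congruent to 3 (mod 4) is ruled out before
# any searching. 2019 is an arbitrary example of mine.
residues = {(x * x) % 4 for x in range(4)}
print(residues)                  # {0, 1}

sums = {(a + b) % 4 for a in residues for b in residues}
print(sums)                      # {0, 1, 2} -- never 3

assert 2019 % 4 == 3
assert all(x * x + y * y != 2019 for x in range(45) for y in range(45))
```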

We name these equations after Diophantus of Alexandria, a 3rd century Greek mathematician. His writings, what we have of them, discuss how to solve equations. Not general solutions, the way we might want to solve ax^2 + bx + c = 0 , but specific ones, like 1x^2 - 5x + 6 = 0 . His books are among those whose rediscovery shaped the rebirth of mathematics. Pierre de Fermat scribbled his famous note in the too-small margins of Diophantus’s Arithmetica. (Well, a popular translation.)

But the field predates Diophantus, at least if we look at specific problems. Of course it does. In mathematics, as in life, any search for a source ends in a vast, marshy ambiguity. The field stays vital. If we loosen ourselves to looking at inequalities — x - Dy^2 < A , let's say — then we start seeing optimization problems. What values of x and y will make this equation most nearly true? What values will come closest to satisfying this bunch of equations? The questions are about how to find the best possible fit to whatever our complicated sets of needs are. We can't always answer. We keep searching.

The Summer 2017 Mathematics A To Z: Arithmetic

And now as summer (United States edition) reaches its closing months I plunge into the fourth of my A To Z mathematics-glossary sequences. I hope I know what I’m doing! Today’s request is one of several from Gaurish, who’s got to be my top requester for mathematical terms and whom I thank for it. It’s a lot easier writing these things when I don’t have to think up topics. Gaurish hosts a fine blog, For the love of Mathematics, which you might consider reading.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.


Arithmetic is what people who aren’t mathematicians figure mathematicians do all day. I remember in my childhood a Berenstain Bears book about people’s jobs. Its mathematician was an adorable little bear adding up sums on the chalkboard, in an observatory, on the Moon. I liked every part of this. I wouldn’t say it’s the whole reason I became a mathematician but it did make the prospect look good early on.

People who aren’t mathematicians are right. At least, the bulk of what mathematics people do is arithmetic. If we work by volume. Arithmetic is about the calculations we do to evaluate or solve polynomials. And polynomials are everything that humans find interesting. Arithmetic is the adding and subtracting, the multiplication and division, the taking of powers and the taking of roots. Arithmetic is the changing of the units of a thing, the breaking of something into several smaller units, the merging of several smaller units into one big one. Arithmetic’s role in commerce and in finance must overwhelm the higher mathematics. Higher mathematics offers cohomologies and Ricci tensors. Arithmetic offers a budget.

This is old mathematics. There’s evidence of humans twenty thousand years ago recording their arithmetic computations. My understanding is the evidence is ambiguous and interpretations vary. This seems fair. I assume that humans did such arithmetic then, granting that I do not know how to interpret archeological evidence. The thing is that arithmetic is older than humans. Animals are able to count, to do addition and subtraction, perhaps to do harder computations. (I crib this from The Number Sense: How the Mind Creates Mathematics, by Stanislas Dehaene.) We learn it first, refining our rough instinctively developed sense to something rigorous. At least we learn it at the same time we learn geometry, the other branch of mathematics that must predate human existence.

The primacy of arithmetic governs how it becomes an adjective. We will have, for example, the “arithmetic progression” of terms in a sequence. This is a sequence of numbers such as 1, 3, 5, 7, 9, and so on. Or 4, 9, 14, 19, 24, 29, and so on. The difference between one term and its successor is the same as the difference between the predecessor and this term. Or we speak of the “arithmetic mean”. This is the one found by adding together all the numbers of a sample and dividing by the number of terms in the sample. These are important concepts, useful concepts. They are among the first concepts we have when we think of a thing. Their familiarity makes them easy tools to overlook.
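Both terms are quick to illustrate; the sample numbers here are mine, apart from the progression quoted above.

```python
# An arithmetic progression steps by a constant difference; the
# arithmetic mean is the familiar add-and-divide average.
seq = [4, 9, 14, 19, 24, 29]                  # progression from the text
steps = [b - a for a, b in zip(seq, seq[1:])]
print(steps)                                  # [5, 5, 5, 5, 5]

sample = [3, 7, 8, 10]                        # sample values are mine
mean = sum(sample) / len(sample)
print(mean)                                   # 7.0
```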

Consider the Fundamental Theorem of Arithmetic. There are many Fundamental Theorems; that of Algebra guarantees us the number of roots of a polynomial equation. That of Calculus guarantees us that derivatives and integrals are joined concepts. The Fundamental Theorem of Arithmetic tells us that every whole number greater than one is equal to one and only one product of prime numbers. If a number is equal to (say) two times two times thirteen times nineteen, it cannot also be equal to (say) five times eleven times seventeen. This may seem uncontroversial. The budding mathematician will convince herself it’s so by trying to work out all the ways to write 60 as the product of prime numbers. It’s hard to imagine mathematics for which it isn’t true.
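Here is the budding mathematician’s exercise done by machine, a trial-division sketch of my own. However the search proceeds, 60 yields the same multiset of primes.

```python
# Trial division: peel off the smallest factor repeatedly. The
# Fundamental Theorem of Arithmetic says the resulting list of
# primes is the only one possible for each number.
def factor(n):
    primes, d = [], 2
    while d * d <= n:
        while n % d == 0:
            primes.append(d)
            n //= d
        d += 1
    if n > 1:
        primes.append(n)
    return primes

print(factor(60))                   # [2, 2, 3, 5]
assert factor(2 * 2 * 13 * 19) == [2, 2, 13, 19]
```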

But it needn’t be true. As we study why arithmetic works we discover many strange things. This mathematics that we know even without learning is sophisticated. To build a logical justification for it requires a theory of sets and hundreds of pages of tight reasoning. Or a theory of categories and I don’t even know how much reasoning. The thing that is obvious from putting a couple objects on a table and then a couple more is hard to prove.

As we continue studying arithmetic we start to ponder things like Goldbach’s Conjecture, about even numbers (other than two) being the sum of exactly two prime numbers. This brings us into number theory, a land of fascinating problems. Many of them are so accessible you could pose them to a person while waiting in a fast-food line. This befits a field that grows out of such simple stuff. Many of those are so hard to answer that no person knows whether they are true, or are false, or are even answerable.

And it splits off other ideas. Arithmetic starts, at least, with the counting numbers. It moves into the whole numbers and soon all the integers. With division we soon get rational numbers. With roots we soon get certain irrational numbers. A close study of this implies there are other irrational numbers that must exist, at least as much as “four” exists. Yet they can’t be reached by studying polynomials. Not polynomials that don’t already use these exotic irrational numbers. These are transcendental numbers. If we were to say the transcendental numbers were the only real numbers we would be making only a very slight mistake. We learn they exist by thinking long enough and deep enough about arithmetic to realize there must be more there than we realized.

Thought compounds thought. The integers and the rational numbers and the real numbers have a structure. They interact in certain ways. We can look for things that are not numbers, but which follow rules like that for addition and for multiplication. Sometimes even for powers and for roots. Some of these can be strange: polynomials themselves, for example, follow rules like those of arithmetic. Matrices, which we can represent as grids of numbers, can have powers and even something like roots. Arithmetic is inspiration to finding mathematical structures that look little like our arithmetic. We can find things that follow mathematical operations but which don’t have a Fundamental Theorem of Arithmetic.

And there are more related ideas. These are often very useful. There’s modular arithmetic, in which we adjust the rules of addition and multiplication so that we can work with a finite set of numbers. There’s floating point arithmetic, in which we set machines to do our calculations. These calculations are no longer precise. But they are fast, and reliable, and that is often what we need.
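A two-line taste of each, with examples of my choosing:

```python
# Floating point trades exactness for speed; modular arithmetic
# keeps exactness on its finite set.
print(0.1 + 0.2)             # 0.30000000000000004, not quite 0.3
assert 0.1 + 0.2 != 0.3

print((7 + 9) % 12)          # 4: clock-style, and perfectly exact
assert (7 + 9) % 12 == 4
```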

So arithmetic is what people who aren’t mathematicians figure mathematicians do all day. And they are mistaken, but not by much. Arithmetic gives us an idea of what mathematics we can hope to understand. So it structures the way we think about mathematics.

Reading the Comics, June 3, 2017: Feast Week Conclusion Edition

And now finally I can close out last week’s many mathematically-themed comic strips. I had hoped to post this Thursday, but the Why Stuff Can Orbit supplemental took up my writing energies and eventually timeslot. This also ends up being the first time I’ve had one of Joe Martin’s comic strips since the Houston Chronicle ended its comics pages and I admit I’m not sure how I’m going to work this. I’m also not perfectly sure what the comic strip means.

So Joe Martin’s Mister Boffo for the 1st of June seems to be about a disastrous mathematics exam with a kid bad enough he hasn’t even got numbers exactly to express the score. Also I’m not sure there is a way to link to the strip I mean exactly; the archives for Martin’s strips are not … organized the way I would have done. Well, they’re his business.

A Time To Worry: '[Our son] says he got a one-de-two-three-z on the math test.'
So Joe Martin’s Mister Boffo for the 1st of June, 2017. The link is probably worthless, since I can’t figure out how to work its archives. Good luck yourselves with it.

Greg Evans’s Luann Againn for the 1st reruns the strip from the 1st of June, 1989. It’s your standard resisting-the-word-problem joke. On first reading the strip I didn’t get what the problem was asking for, and supposed that the text had garbled the problem, if there were an original problem. That was my sloppiness is all; it’s a perfectly solvable question once you actually read it.

J C Duffy’s Lug Nuts for the 1st — another day that threatened to be a Reading the Comics post all on its own — is a straggler Pi Day joke. It’s just some Dadaist clowning about.

Doug Bratton’s Pop Culture Shock Therapy for the 1st is a wordplay joke that uses word problems as emblematic of mathematics. I’m okay with that; much of the mathematics that people actually want to do amounts to extracting from a situation the things that are relevant and forming an equation based on that. This is what a word problem is supposed to teach us to do.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 1st — maybe I should have done a Reading the Comics for that day alone — riffs on the idle speculation that God would be a mathematician. It does this by showing a God uninterested in two logical problems. The first is the question of whether there’s an odd perfect number. Perfect numbers are these things that haunt number theory. (Everything haunts number theory.) It starts with idly noticing what happens if you pick a number, find the numbers (other than itself) that divide into it, and add those up. For example, 4 can be divided by 1 and 2; those add to 3. 5 can only be divided by 1; that adds to 1. 6 can be divided by 1, 2, and 3; those add to 6. For a perfect number the divisors add up to the original number. Perfect numbers look rare; for a thousand years or so only four of them (6, 28, 496, and 8128) were known to exist.

All the perfect numbers we know of are even. More, they’re all numbers that can be written as the product 2^{p - 1} \cdot \left(2^p - 1\right) for certain prime numbers ‘p’. (They’re the ones for which 2^p - 1 is itself a prime number.) What we don’t know, and haven’t got a hint about proving, is whether there are any odd perfect numbers. We know some things about odd perfect numbers, if they exist, the most notable of them being that they’ve got to be incredibly huge numbers, much larger than a googol, the standard idea of an incredibly huge number. Presumably an omniscient God would be able to tell whether there were an odd perfect number, or at least would be able to care whether there were. (It’s also not known if there are infinitely many perfect numbers, by the way. This reminds us that number theory is pretty much nothing but a bunch of easy-to-state problems that we can’t solve.)
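Both claims are easy to check by machine for the four small cases. A brute-force sketch, with bounds of my choosing:

```python
from math import isqrt

# Find the perfect numbers below 10,000, then rebuild them from
# the 2^(p-1) * (2^p - 1) recipe using the primes p for which
# 2^p - 1 is itself prime.
def divisor_sum(n):
    # sum of the divisors of n other than n itself
    total = 1
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            total += d
            if d != n // d:
                total += n // d
    return total

found = [n for n in range(2, 10000) if divisor_sum(n) == n]
print(found)                  # [6, 28, 496, 8128]

for p in (2, 3, 5, 7):        # 2^p - 1 is prime for each of these
    n = 2**(p - 1) * (2**p - 1)
    assert divisor_sum(n) == n
```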

Some miscellaneous other things we know about an odd perfect number, other than whether any exist: if there are odd perfect numbers, they’re not divisible by 105. They’re equal to one more than a whole multiple of 12. They’re also 117 more than a whole multiple of 468, and they’re 81 more than a whole multiple of 324. They’ve got to have at least 101 prime factors, and there have to be at least ten distinct prime factors. There have to be at least twelve distinct prime factors if 3 isn’t a factor of the odd perfect number. If this seems like a screwy list of things to know about a thing we don’t even know exists, then welcome to number theory.

The beard question I believe is a reference to the logician’s paradox. This is the one postulating a village in which the village barber shaves all, but only, the people who do not shave themselves. Given that, who shaves the barber? It’s an old joke, but if you take it seriously you learn something about the limits of what a system of logic can tell you about itself.

Tiger: 'I've got two plus four hours of homework. I won't be finished until ten minus three o'clock, or maybe even six plus one and a half o'clock.' Punkin: 'What subject?' Tiger: 'Arithmetic, stupid!'
Bud Blake’s Tiger rerun for the 2nd of June, 2017. Bonus arithmetic problem: what’s the latest time that this could be? Also, don’t you like how the dog’s tail spills over the panel borders twice? I do.

Bud Blake’s Tiger rerun for the 2nd has Tiger’s arithmetic homework spill out into real life. This happens sometimes.

Officer Pupp: 'That Mouse is most sure an oaf of awful dumbness, Mrs Kwakk Wakk - y'know that?' Mrs Kwakk Wakk: 'By what means do you find proof of this, Officer Pupp?' 'His sense of speed is insipid - he doesn't seem to know that if I ran 60 miles an hour, and he only 40, that I would eventually catch up to him.' 'No-' 'Yes- I tell you- yes.' 'He seemed to know that a brick going 60 would catch up to a kat going 40.' 'Oh, he did, did he?' 'Why, yes.'
George Herriman’s Krazy Kat for the 10th of July, 1939 and rerun the 2nd of June, 2017. I realize that by contemporary standards this is a very talky comic strip. But read Officer Pupp’s dialogue, particularly in the second panel. It just flows with a wonderful archness.

George Herriman’s Krazy Kat for the 10th of July, 1939 was rerun the 2nd of June. I’m not sure that it properly fits here, but the talk about Officer Pupp running at 60 miles per hour and Ignatz Mouse running forty and whether Pupp will catch Mouse sure reads like a word problem. Later strips in the sequence, including the ways that a tossed brick could hit someone who’d be running faster than it, did not change my mind about this. Plus I like Krazy Kat so I’ll take a flimsy excuse to feature it.

The End 2016 Mathematics A To Z: Monster Group

Today’s is one of my requested mathematics terms. This one comes to us from group theory, by way of Gaurish, and as ever I’m thankful for the prompt.

Monster Group.

It’s hard to learn from an example. Examples are great, and I wouldn’t try teaching anything subtle without one. Might not even try teaching the obvious without one. But a single example is dangerous. The learner has trouble telling what parts of the example are the general lesson to learn and what parts are just things that happen to be true for that case. Having several examples, of different kinds of things, saves the student. The thing in common to many different examples is the thing to retain.

The mathematics major learns group theory in Introduction To Not That Kind Of Algebra, MAT 351. A group extracts the barest essence of arithmetic: a bunch of things and the ability to add them together. So what’s an example? … Well, the integers do nicely. What’s another example? … Well, the integers modulo two, where the only things are 0 and 1 and we know 1 + 1 equals 0. What’s another example? … The integers modulo three, where the only things are 0 and 1 and 2 and we know 1 + 2 equals 0. How about another? … The integers modulo four? Modulo five?

All true. All, also, basically the same thing. The whole set of integers, or of real numbers, is different. But as finite groups, the integers modulo anything are nice, easy-to-understand groups. They’re known as Cyclic Groups for reasons I’ll explain if asked. But all the Cyclic Groups are kind of the same.
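The integers modulo three are small enough to print whole, which is part of their charm:

```python
# The full addition table for the integers modulo 3.
n = 3
for a in range(n):
    print([(a + b) % n for b in range(n)])
# Rows come out [0, 1, 2], [1, 2, 0], [2, 0, 1]: every row a
# shuffle of the same three things, and 1 + 2 lands back on 0.
```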

So how about another example? And here we get some good ones. There’s the Permutation Groups. These are fun. You start off with a set of things. You can label them anything you like, but you’re daft if you don’t label them the counting numbers. So, say, the set of things 1, 2, 3, 4, 5. Start with them in that order. A permutation is the swapping of any pair of those things. So swapping, say, the second and fifth things to get the list 1, 5, 3, 4, 2. The collection of all the swaps you can make is the Permutation Group on this set of things. The things in the group are not 1, 2, 3, 4, 5. The things in the permutation group are “swap the second and fifth thing” or “swap the third and first thing” or “swap the fourth and the third thing”. You maybe feel uneasy about this. That’s all right. I suggest playing with this until you feel comfortable because it is a lot of fun to play with. Playing in this case means writing out all the ways you can swap stuff, which you can always do as a string of swaps of exactly two things.
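That playing can be automated, if you like. This sketch of mine swaps pairs until nothing new turns up, confirming that strings of two-element swaps reach every arrangement of five things:

```python
from itertools import permutations

# Swap pairs of positions, starting from (1, 2, 3, 4, 5), until
# nothing new appears.
def swap(seq, i, j):                # swap the i-th and j-th things (1-based)
    out = list(seq)
    out[i - 1], out[j - 1] = out[j - 1], out[i - 1]
    return tuple(out)

start = (1, 2, 3, 4, 5)
print(swap(start, 2, 5))            # (1, 5, 3, 4, 2), as in the text

seen, frontier = {start}, [start]
while frontier:
    nxt = []
    for s in frontier:
        for i in range(1, 5):
            for j in range(i + 1, 6):
                t = swap(s, i, j)
                if t not in seen:
                    seen.add(t)
                    nxt.append(t)
    frontier = nxt

print(len(seen))                    # 120: every arrangement is reachable
```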

(Some people may remember an episode of Futurama that involved a brain-swapping machine. Or a body-swapping machine, if you prefer. The gimmick of the episode is that two people could only swap bodies/brains exactly one time. The problem was how to get everybody back in their correct bodies. It turns out to be possible to do, and one of the show’s writers did write a proof of it. It’s shown on-screen for a moment. Many fans were awestruck by an episode of the show inspiring a Mathematical Theorem. They’re overestimating how rare theorems are. But it is fun when real mathematics gets done as a side effect of telling a good joke. Anyway, the theorem fits well in group theory and the study of these permutation groups.)

So the student wanting examples of groups can get the Permutation Group on three elements. Or the Permutation Group on four elements. The Permutation Group on five elements. … You kind of see, this is certainly different from those Cyclic Groups. But they’re all kind of like each other.

An “Alternating Group” is one where all the elements in it are products of an even number of swaps. So, “swap the second and fifth things” would not be in an alternating group. But “swap the second and fifth things, and swap the fourth and second things” would be. And so the student needing examples can look at the Alternating Group on two elements. Or the Alternating Group on three elements. The Alternating Group on four elements. And so on. It’s slightly different from the Permutation Group. It’s certainly different from the Cyclic Group. But still, if you’ve mastered the Alternating Group on five elements you aren’t going to see the Alternating Group on six elements as all that different.
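You can count the even ones directly. In this sketch, counting inversions is my choice of parity test; an even inversion count means the permutation is a product of an even number of swaps.

```python
from itertools import permutations

# Tally the 'even' permutations of five things: exactly half of
# the 120 arrangements, giving the Alternating Group's size.
def parity(perm):
    inversions = sum(1 for i in range(len(perm))
                       for j in range(i + 1, len(perm))
                       if perm[i] > perm[j])
    return inversions % 2

evens = [p for p in permutations(range(5)) if parity(p) == 0]
print(len(evens))                   # 60
```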

Cyclic Groups and Alternating Groups have some stuff in common. Permutation Groups not so much and I’m going to leave them in the above paragraph, waving, since they got me to the Alternating Groups I wanted.

One is that they’re finite. At least they can be. I like finite groups. I imagine students like them too. It’s nice having a mathematical thing you can write out in full and know you aren’t missing anything.

The second thing is that they are, or they can be, “simple groups”. That’s … a challenge to explain. This has to do with the structure of the group and the kinds of subgroup you can extract from it. It’s very very loosely and figuratively and do not try to pass this off at your thesis defense kind of like being a prime number. In fact, Cyclic Groups for a prime number of elements are simple groups. So are Alternating Groups on five or more elements.

So we get to wondering: what are the finite simple groups? Turns out they come in four main families. One family is the Cyclic Groups for a prime number of things. One family is the Alternating Groups on five or more things. One family is this collection called the Chevalley Groups. Those are mostly things about projections: the ways to map one set of coordinates into another. We don’t talk about them much in Introduction To Not That Kind Of Algebra. They’re too deep into Geometry for people learning Algebra. The last family is this collection called the Twisted Chevalley Groups, or the Steinberg Groups. And they … uhm. Well, I never got far enough into Geometry I’m Guessing to understand what they’re for. I’m certain they’re quite useful to people working in the field of order-three automorphisms of the whatever exactly D4 is.

And that’s it. That’s all the families there are. If it’s a finite simple group then it’s one of these. … Unless it isn’t.

Because there are a couple of stragglers. There are a few finite simple groups that don’t fit in any of the four big families. And it really is only a few. I would have expected an infinite number of weird little cases that don’t belong to a family that looks similar. Instead, there are 26. (27 if you decide a particular one of the Steinberg Groups doesn’t really belong in that family. I’m not familiar enough with the case to have an opinion.) Funny number to have turn up. It took ten thousand pages to prove there were just the 26 special cases. I haven’t read them all. (I haven’t read any of the pages. But my Algebra professors at Rutgers were proud to mention their department’s work in tracking down all these cases.)

Some of these cases have some resemblance to one another. But not enough to see them as a family the way the Cyclic Groups are. We bundle all these together in a wastebasket taxon called “the sporadic groups”. The first five of them were worked out in the 1860s. The last of them was worked out in 1980, seven years after its existence was first suspected.

The sporadic groups all have weird sizes. The smallest one, known as M11 (for “Mathieu”, who found it and four of its siblings in the 1860s) has 7,920 things in it. They get enormous soon after that.

The biggest of the sporadic groups, and the last one described, is the Monster Group. It’s known as M. It has a lot of things in it. In particular it’s got 808,017,424,794,512,875,886,459,904,961,710,757,005,754,368,000,000,000 things in it. So, you know, it’s not like we’ve written out everything that’s in it. We’ve just got descriptions of how you would write out everything in it, if you wanted to try. And you can get a good argument going about what it means for a mathematical object to “exist”, or to be “created”. There are something like 10^{54} things in it. That’s something like a trillion times a trillion times the number of stars in the observable universe. Not just the stars in our galaxy, but all the stars in all the galaxies we could in principle ever see.
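If you’d like to check that enormous figure without writing anything out, the Monster’s order has a well-known prime factorization, and a few lines of arithmetic confirm the number above:

```python
# The Monster Group's order, from its published prime factorization.
factorization = {2: 46, 3: 20, 5: 9, 7: 6, 11: 2, 13: 3,
                 17: 1, 19: 1, 23: 1, 29: 1, 31: 1, 41: 1,
                 47: 1, 59: 1, 71: 1}

order = 1
for prime, power in factorization.items():
    order *= prime ** power

# A 54-digit number: roughly 8 * 10**53 elements.
```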

It’s one of the rare things for which “Brobdingnagian” is an understatement. Everything about it is mind-boggling, the sort of thing that staggers the imagination more than infinitely large things do. We don’t really think of infinitely large things; we just picture “something big”. A number like that one above is definite, and awesomely big. Just read off the digits of that number; it sounds like what we imagine infinity ought to be.

We can make a chart, called the “character table”, which describes how subsets of the group interact with one another. The character table for the Monster Group is 194 rows tall and 194 columns wide. The Monster Group can be represented as this, I am solemnly assured, logical and beautiful algebraic structure. It’s something like a polyhedron in rather more than three dimensions of space. In particular it needs 196,884 dimensions to show off its particular beauty. I am taking experts’ word for it. I can’t quite imagine more than 196,883 dimensions for a thing.

And it’s a thing full of mystery. This creature of group theory makes us think of the number 196,884. The same 196,884 turns up in number theory, the study of how integers are put together. It’s the first non-boring coefficient in a thing called the j-function. It’s not coincidence. This bit of number theory and this bit of group theory are bound together, but it took some years for anyone to quite understand why.

There are more mysteries. The character table has 194 rows and columns. Each column implies a function. Some of those functions are duplicated; there are 171 distinct ones. But some of the distinct ones it turns out you can find by adding together multiples of others. There are 163 distinct ones. 163 appears again in number theory, in the study of algebraic integers. These are, of course, not integers at all. They’re things that look like complex-valued numbers: some real number plus some (possibly other) real number times the square root of some specified negative number. They’ve got neat properties. Or weird ones.

You know how with integers there’s just one way to factor them? Like, fifteen is equal to three times five and no other set of prime numbers? Algebraic integers don’t work like that. There are usually multiple ways to do it. There are exceptions, algebraic integers that still have unique factorings. They happen only for a few square roots of negative numbers. The one farthest from zero for which it works? Minus 163.
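The essay stops short of an example, so here’s a small illustration of how unique factoring fails. I’m using the square root of minus five (my choice; it’s the classic case, though not one of the numbers in the 163 story). Among the numbers a + b√−5, the number 6 factors in two genuinely different ways:

```python
def mul(x, y):
    """Multiply a + b*sqrt(-5) by c + d*sqrt(-5); numbers are (a, b) pairs."""
    (a, b), (c, d) = x, y
    # (a + b*r)(c + d*r) with r*r = -5
    return (a * c - 5 * b * d, a * d + b * c)

# 6 = 2 * 3, and also 6 = (1 + sqrt(-5)) * (1 - sqrt(-5))
assert mul((2, 0), (3, 0)) == (6, 0)
assert mul((1, 1), (1, -1)) == (6, 0)
```

Neither factoring can be refined into the other, which is exactly the misbehavior that unique-factorization fields avoid.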

I don’t know if this 163 appearance means something. As I understand the matter, neither does anybody else.

There is some link to the mathematics of string theory. That’s an interesting but controversial and hard-to-experiment-upon model for how the physics of the universe may work. But I don’t know string theory well enough to say what it is or how surprising this should be.

The Monster Group creates a monster essay. I suppose it couldn’t do otherwise. I suppose I can’t adequately describe all its sublime mystery. Dr Mark Ronan has written a fine web page describing much of the Monster Group and the history of our understanding of it. He also has written a book, Symmetry and the Monster, to explain all this in greater depths. I’ve not read the book. But I do mean to, now.

Reading the Comics, August 19, 2016: Mathematics Signifier Edition

I know it seems like when I write these essays I spend the most time on the first comic in the bunch and give the last ones a sentence, maybe two at most. I admit when there’s a lot of comics I have to write up at once my energy will droop. But Comic Strip Master Command apparently wants the juiciest topics sent out earlier in the week. I have to follow their lead.

Stephen Beals’s Adult Children for the 14th uses mathematics to signify deep thinking. In this case Claremont, the dog, is thinking of the Riemann Zeta function. It’s something important in number theory, so longtime readers should know this means it leads right to an unsolved problem. In this case it’s the Riemann Hypothesis. That’s the most popular candidate for “what is the most important unsolved problem in mathematics right now?” So you know Claremont is a deep-thinking dog.

The big Σ ordinary people might recognize as representing “sum”. The notation means to evaluate, for each legitimate value of the thing underneath — here it’s ‘n’ — the value of the expression to the right of the Sigma. Here that’s \frac{1}{n^s} . Then add up all those terms. It’s not explicit here, but context would make clear, n is positive whole numbers: 1, 2, 3, and so on. s would be a number bigger than 1, so that the sum adds up to something finite; it might or might not be a whole number.

The big capital Pi is more mysterious. It’s Sigma’s less popular brother. It means “product”. For each legitimate value of the thing underneath it — here it’s “p” — evaluate the expression on the right. Here that’s \frac{1}{1 - \frac{1}{p^s}} . Then multiply all that together. In the context of the Riemann Zeta function, “p” here isn’t just any old number, or even any old whole number. It’s only the prime numbers. Hence the “p”. Good notation, right? Yeah.

This particular equation, once shored up with the context the symbols live in, was proved by Leonhard Euler, who proved so much you sometimes wonder if later mathematicians were needed at all. It ties in to how often whole numbers are going to be prime, and what the chances are that some set of numbers are going to have no factors in common. (Other than 1, which is too boring a number to call a factor.) But even if Claremont did know that Euler got there first, it’s almost impossible to do good new work without understanding the old.
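Claremont’s blackboard identity is easy to check numerically. Here’s a sketch that truncates both the sum and the product at cutoffs of my own choosing, for s = 2, where both sides should approach π²/6:

```python
import math

def zeta_sum(s, terms=100_000):
    """Partial sum of 1/n**s over n = 1, 2, 3, ..."""
    return sum(1 / n ** s for n in range(1, terms + 1))

def zeta_product(s, limit=100_000):
    """Partial Euler product of 1/(1 - p**-s) over primes p up to limit."""
    # Sieve of Eratosthenes to find the primes.
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            for multiple in range(p * p, limit + 1, p):
                sieve[multiple] = False
    product = 1.0
    for p in range(2, limit + 1):
        if sieve[p]:
            product *= 1 / (1 - p ** -s)
    return product
```

Both land within a whisker of π²/6 ≈ 1.64493, and of each other.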

Charlos Gary’s Working It Out for the 14th is this essay’s riff on pie charts. Or bar charts. Somewhere around here the past week I read that a French idiom for the pie chart is the “cheese chart”. That’s a good enough bit I don’t want to look more closely and find out whether it’s true. If it turned out to be false I’d be heartbroken.

Ryan North’s Dinosaur Comics for the 15th talks about everyone’s favorite physics term, entropy. Everyone knows that it tends to increase. Few advanced physics concepts feel so important to everyday life. I almost made one expression of this — Boltzmann’s H-Theorem — a Theorem Thursday post. I might do a proper essay on it yet. Utahraptor describes this as one of “the few statistical laws of physics”, which I think is a bit unfair. There’s a lot about physics that is statistical; it’s often easier to deal with averages and distributions than the mass of real messy data.

Utahraptor’s right to point out that it isn’t impossible for entropy to decrease. It can be expected not to, in time. Indeed decent scientists thinking as philosophers have proposed that “increasing entropy” might be the only way to meaningfully define the flow of time. (I do not know how decent the philosophy of this is. This is far outside my expertise.) However: we would expect at least one tails to come up if we simultaneously flipped infinitely many coins fairly. But there is no reason that it couldn’t happen, that infinitely many fairly-tossed coins might all come up heads. The probability of this ever happening is zero. Yet zero probability is not the same thing as impossible. Such is the intuition-destroying nature of probability and of infinitely large things.

Tony Cochran’s Agnes on the 16th proposes to decode the Voynich Manuscript. Mathematics comes in as something with answers that one can check for comparison. It’s a familiar role. As I seem to write three times a month, this is fair enough to say to an extent. Coming up with an answer to a mathematical question is hard. Checking the answer is typically easier. Well, there are many ways we can try to find an answer. To see whether a proposed answer works, usually we just need to go through it and see if the logic holds. This might be tedious to do, especially in those enormous brute-force problems where the proof amounts to showing there are a hundred zillion special cases and here’s an answer for each one of them. But it’s usually much easier than finding the answer was.

Johnny Hart and Brant Parker’s Wizard of Id Classics for the 17th uses what seems like should be an old joke about bad accountants and nepotism. Well, you all know how important bookkeeping is to the history of mathematics, even if I’m never that specific about it because it never gets mentioned in the histories of mathematics I read. And apparently sometime between the strip’s original appearance (the 20th of August, 1966) and my childhood the Royal Accountant character got forgotten. That seems odd given the comic potential I’d imagine him to have. Sometimes a character’s only good for a short while is all.

Mark Anderson’s Andertoons for the 18th is the Andertoons representative for this essay. Fair enough. The kid speaks of exponents as a kind of repeating oneself. This is how exponents are inevitably introduced: as multiplying a number by itself many times over. That’s a solid way to introduce raising a number to a whole number. It gets a little strained to describe raising a number to a rational number. It’s a confusing mess to describe raising a number to an irrational number. But you can make that logical enough, with effort. And that’s how we do make the idea rigorous. A number raised to (say) the square root of two is something greater than the number raised to 1.4, but less than the number raised to 1.5. More than the number raised to 1.41, less than the number raised to 1.42. More than the number raised to 1.414, less than the number raised to 1.415. This takes work, but it all hangs together. And then we ask about raising numbers to an imaginary or complex-valued number and we wave that off to a higher-level mathematics class.
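The squeeze that paragraph describes is easy to watch happen. A little sketch, using 2 as the base (my arbitrary choice), pins the irrational power between ever-tighter rational ones:

```python
from math import sqrt

x = 2
target = x ** sqrt(2)

# Successive decimal truncations of sqrt(2) = 1.41421356...
# bracket x ** sqrt(2) ever more narrowly.
brackets = [(1.4, 1.5), (1.41, 1.42), (1.414, 1.415), (1.4142, 1.4143)]
for low, high in brackets:
    assert x ** low < target < x ** high
```

Each bracket is a tenth the width of the one before, and the irrational power never escapes.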

Nate Fakes’s Break of Day for the 18th is the anthropomorphic-numerals joke for this essay.

Lachowski’s Get A Life for the 18th is the sudoku joke for this essay. It’s also a representative of the idea that any mathematical thing is some deep, complicated puzzle at least as challenging as calculating one’s taxes. I feel like this is a rerun, but I don’t see any copyright dates. Sudoku jokes like this feel old, but comic strips have been known to make dated references before.

Samson’s Dark Side Of The Horse for the 19th is this essay’s Dark Side Of The Horse gag. I thought initially this was a counting sheep in a lab coat. I’m going to stick to that mistaken interpretation because it’s more adorable that way.

Reading the Comics, August 1, 2016: Kalends Edition

The last day of July and first day of August saw enough mathematically-themed comic strips to fill a standard-issue entry. The rest of the week wasn’t so well-stocked. But I’ll cover those comics on Tuesday if all goes well. This may be a silly plan, but it is a plan, and I will stick to that.

Johnny Hart’s Back To BC reprints the venerable and groundbreaking comic strip from its origins. On the 31st of July it reprinted a strip from February 1959 in which Peter discovers mathematics. The work’s elaborate, much more than we would use to solve the problem today. But it’s always like that. Newly-discovered mathematics is much like any new invention or innovation, a rickety set of things that just barely work. With time we learn better how the idea should be developed. And we become comfortable with the cultural assumptions going into the work. So we get more streamlined, faster, easier-to-use mathematics in time.

The early invention of mathematics reappears the 1st of August, in a strip from earlier in February 1959. In this case it’s the sort of word problem confusion strip that any comic with a student could do. That’s a bit disappointing but Hart had much less space than he’d have for the Sunday strip above. One must do what one can.

Mac King and Bill King’s Magic in a Minute for the 31st maybe isn’t really mathematics. I guess there’s something in the modular-arithmetic implied by it. But it depends on a neat coincidence. Follow the directions in the comic about picking a number from one to twelve and counting out the letters in the word for that number. And then the letters in the word for the number you’re pointing to, and then once again. It turns out this leads to the same number. I’d never seen this before and it’s neat that it does.
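I can’t reproduce the comic’s exact pointing procedure here, but the engine underneath it is letter-counting, and it’s quick to check why that always ends in the same place. Four is the only number from one to twelve whose name has as many letters as the number it names, so repeated counting gets stuck there:

```python
WORDS = {1: 'one', 2: 'two', 3: 'three', 4: 'four', 5: 'five',
         6: 'six', 7: 'seven', 8: 'eight', 9: 'nine', 10: 'ten',
         11: 'eleven', 12: 'twelve'}

def settle(n):
    """Repeatedly replace n with the letter count of its name."""
    while len(WORDS[n]) != n:
        n = len(WORDS[n])
    return n

# every starting number from 1 through 12 winds up at 4
assert all(settle(n) == 4 for n in range(1, 13))
```

The longest trip is from eleven: 11 → 6 → 3 → 5 → 4.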

Rick Detorie’s One Big Happy rerun for the 31st features Ruthie teaching, as she will. She mentions offhand the “friendlier numbers”. By this she undoubtedly means the numbers that are attractive in some way, like being nice to draw. There are “friendly numbers”, though, as number theorists see things. These are sets of numbers. For each number in this set you get the same index if you add together all its divisors (including 1 and the original number) and divide it by the original number. For example, the divisors of six are 1, 2, 3, and 6. Add that together and you get 12; divide that by the original 6 and you get 2. The divisors of 28 are 1, 2, 4, 7, 14, and 28. Add that pile of numbers together and you get 56; divide that by the original 28 and you get 2. So 6 and 28 are friendly numbers, each the friend of the other.

As often happens with number theory there’s a lot of obvious things we don’t know. For example, we know that 1, 2, 3, 4, and 5 have no friends. But we do not know whether 10 has. Nor 14 nor 20. I do not know if it is proved whether there are infinitely many sets of friendly numbers. Nor do I know if it is proved whether there are infinitely many numbers without friends. Those last two sentences are about my ignorance, though, and don’t reflect what number theory people know. I’m open to hearing from people who know better.
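That shared index — the sum of all a number’s divisors over the number itself, what number theorists call the abundancy index — is quick to compute. A sketch:

```python
def abundancy(n):
    """Sum of all divisors of n (including 1 and n), divided by n."""
    return sum(d for d in range(1, n + 1) if n % d == 0) / n
```

abundancy(6) and abundancy(28) both come out to exactly 2, which is what makes them friends. abundancy(10) is 1.8, and whether any other number shares that value is, as I say, not known.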

There are also things called “amicable numbers”, which are easier to explain and to understand than “friendly numbers”. A pair of numbers are amicable if the sum of one number’s divisors is the other number. 220 and 284 are the smallest pair of amicable numbers. Fermat found that 17,296 and 18,416 were an amicable pair; Descartes found that 9,363,584 and 9,437,056 were. Both pairs were known to Arab mathematicians already. Amicable pairs are easy enough to produce. From the ninth century we’ve had Thâbit ibn Kurrah’s rule, which lets you generate sets of numbers. Ruthie wasn’t thinking of any of this, though, and was more thinking how much fun it is to write a 7.
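Amicable pairs are easy to verify with the sum of proper divisors, so here’s a quick check of the smallest pair and Fermat’s:

```python
def proper_divisor_sum(n):
    """Sum of the divisors of n, not counting n itself."""
    return sum(d for d in range(1, n) if n % d == 0)

# each number's proper divisors sum to its partner
assert proper_divisor_sum(220) == 284 and proper_divisor_sum(284) == 220
assert proper_divisor_sum(17296) == 18416 and proper_divisor_sum(18416) == 17296
```

Descartes’s pair would pass the same test, though this naive divisor loop takes a while on nine-million-something.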

Terry Border’s Bent Objects for the 1st just missed the anniversary of John Venn’s birthday and all the joke Venn Diagrams that were going around at least if your social media universe looks anything like mine.

Jon Rosenberg’s Scenes from a Multiverse for the 1st is set in “Mathpinion City”, in the “Numerically Flexible Zones”. And I appreciate it’s a joke about the politicization of science. But science and mathematics are human activities. They are culturally dependent. And especially at the dawn of a new field of study there will be long and bitter disputes about what basic terms should mean. It’s absurd for us to think that the question of whether 1 + 1 should equal 2 or 3 could even arise.

But we think that because we have absorbed ideas about what we mean by ‘1’, ‘2’, ‘3’, ‘plus’, and ‘equals’ that settle the question. There was, if I understand my mathematics history right — and I’m not happy with my reading on this — a period in which it was debated whether negative numbers should be considered as less than or greater than the positive numbers. Absurd? Thermodynamics allows for the existence of negative temperatures, and those represent extremely high-energy states, things that are hotter than positive temperatures. A thing may get hotter, from 1 Kelvin to 4 Kelvin to a million Kelvin to infinitely many Kelvin to -1000 Kelvin to -6 Kelvin. If there are intuition-defying things to consider about “negative six” then we should at least be open to the proposition that the universal truths of mathematics are understood by subjective processes.

A Leap Day 2016 Mathematics A To Z: Quaternion

I’ve got another request from Gaurish today. And it’s a word I had been thinking to do anyway. When one looks for mathematical terms starting with ‘q’ this is one that stands out. I’m a little surprised I didn’t do it for last summer’s A To Z. But here it is at last:


I remember the seizing of my imagination the summer I learned imaginary numbers. If we could define a number i, so that i-squared equalled negative 1, and work out arithmetic which made sense out of that, why not do it again? Complex-valued numbers are great. Why not something more? Maybe we could also have some other non-real number. I reached deep into my imagination and picked j as its name. It could be something else. Maybe the logarithm of -1. Maybe the square root of i. Maybe something else. And maybe we could build arithmetic with a whole second other non-real number.

My hopes of this brilliant idea petered out over the summer. It’s easy to imagine a super-complex number, something that’s “1 + 2i + 3j”. And it’s easy to work out adding two super-complex numbers like this together. But multiplying them together? What should i times j be? I couldn’t solve the problem. Also I learned that we didn’t need another number to be the logarithm of -1. It would be π times i. (Or some other numbers. There’s some surprising stuff in logarithms of negative or of complex-valued numbers.) We also don’t need something special to be the square root of i, either. \frac{1}{2}\sqrt{2} + \frac{1}{2}\sqrt{2}\imath will do. (So will another number.) So I shelved the project.

Even if I hadn’t given up, I wouldn’t have invented something. Not along those lines. Finer minds had done the same work and had found a way to do it. The most famous of these is the quaternions. Their discovery is itself famous. Sir William Rowan Hamilton — the namesake of “Hamiltonian mechanics”, so you already know what a fantastic mind he was — had a flash of insight that’s come down in the folklore and romance of mathematical history. He had the idea on the 16th of October, 1843, while walking with his wife along the Royal Canal, in Dublin, Ireland. While walking across the bridge he saw what was missing. It seems he lacked pencil and paper. He carved it into the bridge:

i^2 = j^2 = k^2 = ijk = -1

The bridge now has a plaque commemorating the moment. You can’t make a sensible system with two non-real numbers. But three? Three works.

And they are a mysterious three! i, j, and k are somehow not the same number. But each of them, multiplied by themselves, gives us -1. And the product of the three is -1. They are even more mysterious. To work sensibly, i times j can’t be the same thing as j times i. Instead, i times j equals minus j times i. And j times k equals minus k times j. And k times i equals minus i times k. We must give up commutativity, the idea that the order in which we multiply things doesn’t matter.

But if we’re willing to accept that the order matters, then quaternions are well-behaved things. We can add and subtract them just as we would think to do if we didn’t know they were strange constructs. If we keep the funny rules about the products of i and j and k straight, then we can multiply them as easily as we multiply polynomials together. We can even divide them. We can do all the things we do with real numbers, only with these odd sets of four real numbers.
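Those sign rules fit in a few lines of code. Here’s a sketch of the Hamilton product, with quaternions stored as (real, i, j, k) four-tuples:

```python
def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
minus_one = (-1, 0, 0, 0)

assert qmul(i, i) == minus_one and qmul(j, j) == minus_one
assert qmul(i, j) == k                   # but...
assert qmul(j, i) == (0, 0, 0, -1)       # ...order matters
assert qmul(qmul(i, j), k) == minus_one  # ijk = -1
```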

The way they look, that pattern of 1 + 2i + 3j + 4k, makes them look a lot like vectors. And we can use them like vectors pointing to stuff in three-dimensional space. It’s not quite a comfortable fit, though. That plain old real number at the start of things seems like it ought to signify something, but it doesn’t. In practice, it doesn’t give us anything that regular old vectors don’t. And vectors allow us to ponder not just three- or maybe four-dimensional spaces, but as many as we need. You might wonder why we need more than four dimensions, even allowing for time. It’s because if we want to track a lot of interacting things, it’s surprisingly useful to put them all into one big vector in a very high-dimension space. It’s hard to draw, but the mathematics is nice. Hamiltonian mechanics, particularly, almost beg for it.

That’s not to call them useless, or even a niche interest. They do some things fantastically well. One of them is rotations. We can represent rotating a point around an arbitrary axis by an arbitrary angle as the multiplication of quaternions. There are many ways to calculate rotations. But if we need to do three-dimensional rotations this is a great one because it’s easy to understand and easier to program. And as you’d imagine, being able to calculate what rotations do is useful in all sorts of applications.
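The rotation recipe is short enough to sketch whole. To rotate a point about a unit-length axis by an angle θ, build the quaternion with real part cos(θ/2) and vector part sin(θ/2) times the axis, then sandwich the point between that quaternion and its conjugate. This snippet is self-contained, so it writes out the Hamilton product itself:

```python
from math import cos, sin, pi

def qmul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate(point, axis, angle):
    """Rotate a 3D point about a unit-length axis by angle (radians)."""
    w, s = cos(angle / 2), sin(angle / 2)
    q = (w, s * axis[0], s * axis[1], s * axis[2])
    q_conjugate = (w, -q[1], -q[2], -q[3])
    # Treat the point as a quaternion with zero real part: q p q*.
    rotated = qmul(qmul(q, (0.0, *point)), q_conjugate)
    return rotated[1:]
```

Rotating (1, 0, 0) a quarter-turn about the z-axis lands, to within rounding, on (0, 1, 0), as it should.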

They’ve got good uses in number theory too. The quaternion integers, for example, give a slick proof that every whole number is the sum of four squares. They’re also popular in group theory. They might be the simplest rings that work like arithmetic but that don’t commute. So they can serve as ways to learn properties of more exotic ring structures.

Knowing of these marvelous exotic creatures of the deep mathematics your imagination might be fired. Can we do this again? Can we make something with, say, four unreal numbers? No, no we can’t. Four won’t work. Nor will five. If we keep going, though, we do hit upon success with seven unreal numbers.

This is a set called the octonions. Hamilton had barely worked out the scheme for quaternions when John T Graves, a friend of his at least up through the 16th of December, 1843, wrote of this new scheme. (Graves didn’t publish before Arthur Cayley did. Cayley’s one of those unspeakably prolific 19th century mathematicians. He has at least 967 papers to his credit. And he was a lawyer doing mathematics on the side for about 250 of those papers. This depresses every mathematician who ponders it these days.)

But where quaternions are peculiar, octonions are really peculiar. Let me call three quaternions p, q, and r. p times q might not be the same thing as q times p. But p times the product of q and r will always be the same thing as the product of p and q, itself times r. This we call associativity. Octonions don’t have that. Let me call three octonions s, t, and u. s times the product of t and u may be either plus or minus the product of s and t, times u. (It depends.)
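You can watch associativity fail without memorizing an octonion multiplication table, by building these numbers through the Cayley-Dickson construction: a complex number is a pair of reals, a quaternion a pair of complex numbers, an octonion a pair of quaternions, with a twist in the multiplication rule at each doubling. A sketch, with numbers stored as nested pairs:

```python
def conj(x):
    return x if not isinstance(x, tuple) else (conj(x[0]), neg(x[1]))

def neg(x):
    return -x if not isinstance(x, tuple) else (neg(x[0]), neg(x[1]))

def add(x, y):
    if not isinstance(x, tuple):
        return x + y
    return (add(x[0], y[0]), add(x[1], y[1]))

def sub(x, y):
    return add(x, neg(y))

def mul(x, y):
    # Cayley-Dickson product: (a, b)(c, d) = (ac - d*b, da + bc*)
    if not isinstance(x, tuple):
        return x * y
    (a, b), (c, d) = x, y
    return (sub(mul(a, c), mul(conj(d), b)),
            add(mul(d, a), mul(b, conj(c))))

def nest(coeffs):
    """Pack a list of 2**k real coefficients into nested pairs."""
    if len(coeffs) == 1:
        return coeffs[0]
    half = len(coeffs) // 2
    return (nest(coeffs[:half]), nest(coeffs[half:]))

def unit(i, dim):
    """The i-th basis unit of the dim-dimensional algebra."""
    return nest([1 if j == i else 0 for j in range(dim)])

# Quaternions (dimension 4) associate; octonions (dimension 8) do not.
q1, q2, q3 = (unit(i, 4) for i in (1, 2, 3))
assert mul(mul(q1, q2), q3) == mul(q1, mul(q2, q3))

o1, o2, o4 = (unit(i, 8) for i in (1, 2, 4))
assert mul(mul(o1, o2), o4) != mul(o1, mul(o2, o4))
```

Each imaginary unit still squares to minus one; it’s only the grouping of three-way products that goes astray.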

Octonions have some neat mathematical properties. But I don’t know of any general uses for them that are as catchy as understanding rotations. Not rotations in the three-dimensional world, anyway.

Yes, yes, we can go farther still. There’s a construct called “sedenions”, which have fifteen non-real numbers on them. That’s 16 terms in each number. Where octonions are peculiar, sedenions are really peculiar. They work even less like regular old numbers than octonions do. With octonions, at least, when you multiply s by the product of s and t, you get the same number as you would multiplying s by s and then multiplying that by t. Sedenions don’t even offer that shred of normality. Besides being a way to learn about abstract algebra structures I don’t know what they’re used for.

I also don’t know of further exotic terms along this line. It would seem to fit a pattern if there’s some 32-term construct that we can define something like multiplication for. But it would presumably be even less like regular multiplication than sedenion multiplication is. If you want to fiddle about with that please do enjoy yourself. I’d be interested to hear if you turn up anything, but I don’t expect it’ll revolutionize the way I look at numbers. Sorry. But the discovery might be the fun part anyway.

The Equidistribution of Lattice Shapes of Rings: A Friendly Thesis

So, you’re never going to read my doctoral thesis. That’s fair enough. Few people are ever going to, and even the parts that got turned into papers or textbooks are going to attract few readers. They’re meant for people interested in a particular problem that I never expected most people to find interesting. I’m at ease with that. I wrote for an audience I expected knew nearly all the relevant background, and that was used to reading professional, research-level mathematics. But I knew that the non-mathematics-PhDs in my life would look at it and say they only understood the word ‘the’ on the title page. Even if they could understand more, they wouldn’t try.

Dr Piper Alexis Harron tried meeting normal folks halfway. She took her research, and the incredible frustrations of doing that research — doctoral research is hard, and is almost as much an endurance test as anything else — and wrote what she first meant to be “a grenade launched at my ex-prison”. This turned into something more exotic. It’s a thesis “written for those who do not feel that they are encouraged to be themselves”, people who “don’t do math the `right way’ but could still greatly contribute to the community”.

The result is certainly the most interesting mathematical academic writing I’ve run across. It’s written on several levels, including one meant for people who have heard of mathematics certainly but don’t partake in the stuff themselves. It’s exciting reading and I hope you give it a try. It may not be to your tastes — I’m wary of words like ‘laysplanation’ — but it isn’t going to be boring. And Dr Harron’s secondary thesis, that mathematicians need to more aggressively welcome people into the fold, is certainly right.

You’re not missing anything by not reading my thesis. Although my thesis defense offered the amusing sidelight that the campus’s albino squirrel got into the building. My father and some of the grad student friends of mine were trying without anything approaching success to catch it. I’m still not sure why they thought the squirrel needed handling by them.

Elevator Mathematics

My friend ChefMongoose sent a neat little puzzle that came in a dream. I wanted to share it.

So! It’s not often that my dreams give me math puzzles. Here’s one: You are on floor 20 of a hotel. The stairs are blocked.

There are four elevators in front of you with display panels saying ‘3’, ‘4’, ‘7’, and ’35’. They will take you up that many floors, then the number will double. Going down an elevator will take you down that many floors, then the number will halve.

(The dream didn’t tell me what will happen if you can’t halve the number. For good puzzle logic, let’s assume the elevator goes down that much, then breaks.)

There is no basement, the hotel has an infinite amount of floors. Your challenge: get to floor 101. Can it be done?

(And I have no idea if it can be done. But apparently I, Riker, and Worf were trying to do it.)

The puzzle caught my imagination. It so happens the dream set things up so that this is possible: I worked out one path, and ChefMongoose found another.
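It’s quick to have a computer confirm the puzzle is solvable. This sketch does a breadth-first search using the dream’s rules, with going down on an odd display breaking that elevator, per the puzzle-logic assumption above. The caps on floor and display value are my own additions, just to keep the search finite:

```python
from collections import deque

def solve(start=20, goal=101, elevators=(3, 4, 7, 35),
          floor_cap=300, value_cap=256):
    """Find a sequence of elevator rides from start to goal, or None."""
    seen = {(start, elevators)}
    queue = deque([(start, elevators, [])])
    while queue:
        floor, values, path = queue.popleft()
        if floor == goal:
            return path
        for i, v in enumerate(values):
            if v == 0:          # 0 marks a broken elevator
                continue
            # Ride up v floors; the display doubles.
            if floor + v <= floor_cap and 2 * v <= value_cap:
                state = (floor + v, values[:i] + (2 * v,) + values[i+1:])
                if state not in seen:
                    seen.add(state)
                    queue.append((*state, path + [f'up {v}']))
            # Ride down v floors; the display halves,
            # or the elevator breaks if the display was odd.
            if floor - v >= 1:
                new_v = v // 2 if v % 2 == 0 else 0
                state = (floor - v, values[:i] + (new_v,) + values[i+1:])
                if state not in seen:
                    seen.add(state)
                    queue.append((*state, path + [f'down {v}']))
    return None
```

One route you can check by hand uses nothing but ups: ride the 4 elevator four times (up 4, 8, 16, then 32, reaching floor 80), then the 7 elevator twice (up 7, then 14), landing exactly on 101. The search finds a route no longer than that.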

ChefMongoose was right, of course, that something has to be done about halving the floor steps. I’d thought to make it half of either one less than or one more than the count. That is, going down 7 would turn the elevator into one that goes down either 3 or 4 floors. (My solution turned out not to need either.)

It looks lucky that ChefMongoose, Riker, and Worf picked a set of elevator moves, and rules, and starting, and ending floors that had a solution. Is it, though? Suppose we wanted to get to, say, floor 35? … Well, that’s possible. (Up 7, up 14, down 4, down 2.) 34 obviously follows. (Down 1 more.) 36? (Up 7, up 3, up 6.) 38? (Up 35, down 7, down 3, down 4, down 2, down 1.) The universe of reachable floors is bigger than it might seem at first.

The elevator problem had nagged at me with the thought it was related to some famous mathematical problem. At least that it was a type of one. ChefMongoose worked out what I was thinking of, the Collatz Conjecture. But on further reflection that’s the wrong parallel. This elevator problem is more akin to the McNuggets Problem. (When McDonald’s first sold Chicken McNuggets they were in packs of six, nine, and twenty. So what is the largest number of McNuggets that could not be bought by some combination of packages?) The doubling and halving of floor range makes the problem different, though. I am curious if there are finitely many unreachable floors. I’m also curious whether allowing negative numbers — basement floors — would change what floors are accessible.

The Collatz Conjecture is a fun one. It’s almost a game. Start with a positive whole number. If it’s even, divide it in half. If it’s odd, multiply it by three and add one. Then repeat.

If we start with 1, that’s odd, so we triple it and add one, giving us 4. Even, so cut in half: 2. Even again, so cut in half: 1. That’s going to bring us back to 4.

If we start with 2, we know where this is going. Cut in half: 1. Triple and add one: 4. Cut in half: 2. And repeat.

Starting with 3 suggests something new might happen. Triple 3 and add one: 10. Halve that: 5. Triple and add one: 16. Halve: 8. Halve: 4. Halve: 2. Halve: 1.

4 we’re already a bit sick of at this point. 5 — well, we just worked 5 out. That’ll go 5, 16, 8, 4, 2, 1, etc. Start from 6: we halve it to 3 and then we just worked out 3.

7 jumps right up to 22, then 11, then 34 — what an interesting number there — then 17, and then 52, 26, 13, 40, 20, 10 and we’ve seen that routine already. 10, 5, 16, 8, 4, 2, 1.

The Collatz Conjecture is that whatever positive whole number you start from will lead, eventually, to the 4, 2, 1 cycle. It may take a while to get there. I was working the numbers in my head while falling asleep the other night and got to wondering what exactly was 27’s problem anyway. (It takes over a hundred steps to settle down, and gets to numbers as high as 9,232 before finishing.)
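The game fits in a few lines of code, which also makes it easy to check on 27’s problem:

```python
def collatz(n):
    """The sequence from n down to 1 under the Collatz rules."""
    trajectory = [n]
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        trajectory.append(n)
    return trajectory

# the walk from 3 worked out above
assert collatz(3) == [3, 10, 5, 16, 8, 4, 2, 1]

# 27 really does take over a hundred steps and climb to 9,232
path = collatz(27)
assert len(path) > 100 and max(path) == 9232
```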

Nobody knows whether it’s true. It seems plausible that it might be false. We can imagine a number that doesn’t. At least I can imagine there’s some number, let me call it N, and suppose it’s odd. Then triple that and add one, so we get an even number; halve that maybe a couple times until we get an odd number, and triple that and add one and get back the original N. You might have fun trying out numbers and seeing if you can find a loop like that.

Just do that for fun, though. Mathematicians have tested every number less than 1,152,921,504,606,846,976. (That’s 2^60, a round number in binary.) They all end in that 4, 2, 1 cycle. So it seems hard to believe that 1,152,921,504,606,846,977 and onward wouldn’t. We just don’t know that it’s so.

If you allow zero, then that’s a valid but very short cycle: 0 halves to 0 and never budges. If you allow negative numbers, then there are at least three more cycles. They start from -1, from -5, and from -17. It’s not known whether there are any more in the negative integers.
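Those negative cycles are easy to rediscover by experiment. A sketch (the cycle-detection helper is my own; conveniently, Python's `%` and `//` treat negative numbers so that the odd test and exact halving both still work):

```python
def find_cycle(n):
    """Apply the 3n+1 rule until a value repeats; return the cycle found."""
    seen = {}   # value -> position where first seen
    seq = []
    while n not in seen:
        seen[n] = len(seq)
        seq.append(n)
        # In Python, (-5) % 2 == 1 and (-14) // 2 == -7, so the same
        # expression works for negative integers.
        n = n // 2 if n % 2 == 0 else 3 * n + 1
    return seq[seen[n]:]

print(find_cycle(-1))   # [-1, -2]
print(find_cycle(-5))   # [-5, -14, -7, -20, -10]
print(len(find_cycle(-17)))  # 18
```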

The conjecture’s named for Lothar Collatz, 1910 – 1990, a German mathematician who specialized in numerical analysis. That’s the study of how to do calculations that meaningfully reflect the mathematics we would like to know. The Collatz Conjecture is, to the best of my knowledge, a novelty act. I don’t know of any interesting or useful results that depend on it being true (or false). It’s just a question easy to ask and understand, and that we don’t know how to solve. But those are fun to have around too.

Reading The Comics, December 22, 2015: National Mathematics Day Edition

It was a busy week — well, it’s a season for busy weeks, after all — which is why the mathematics comics pile grew so large before I could do anything about it this time around. I’m not sure which I’d pick as my favorite; the Truth Facts tickles me by playing symbols up for confusion and ambiguity, but Quincy is surely the best-drawn of this collection, and good comic strip art deserves attention. Happily that’s a vintage strip from King Features so I feel comfortable including the comic strip for you to see easily.

Tony Murphy’s It’s All About You (December 15), a comic strip about people not quite being couples, tells a “what happens in Vegas” joke themed to mathematics. The particular topic — a “seminar on gap unification theory” — is something that might actually be a mathematics department specialty. The topic appears in number theory, and particularly in the field of partitions, the study of ways to subdivide collections of discrete things. At this point the subject gets specialized enough that I can’t say very much intelligible about it. Apparently there’s a way of studying these divisions by looking at the distances (the gaps) between where the divisions (the partitions) are made, but my attempts to find a clear explanation all turn up papers in number theory journals that I haven’t got access to and that, I confess, would take me a long while to understand. If anyone from the number theory community wanted to explain things I’d be glad to offer the space.


Odd Proofs

May 2013 turned out to be an interesting month for number theory, in that there’ve been big breakthroughs on two long-standing conjectures. Number theory is great fun in part because it’s got many conjectures that anybody can understand but which are really hard to prove. The one that’s gotten the most attention, at least from the web sites that I read which dip into popular mathematics, has been the one about twin primes.

It’s long been suspected that there are infinitely many pairs of “twin primes”, such as 5 and 7, or 11 and 13, or 29 and 31: primes separated by only two. It’s not proven that there are, not yet. Yitang Zhang of the University of New Hampshire has announced a proof that there are infinitely many pairs of primes that are no more than 70,000,000 apart. That’s admittedly not a tight bound, but it’s far better than anything proven before. While there are infinitely many primes — anyone can prove that — the number of them in any fixed-width range tends to decrease, and one could imagine that the distance between successive primes just keeps increasing, without bounds, the way that (say) each pair of successive powers of two is farther apart than the previous pair was. But it’s not so, and that’s neat to see.
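Twin primes are at least easy to generate at small scales. A sketch using a simple sieve; the bound of 100 is just for illustration:

```python
# Sieve of Eratosthenes up to a small bound, then collect the pairs
# of primes exactly two apart.
bound = 100
is_prime = [False, False] + [True] * (bound - 1)
for p in range(2, int(bound ** 0.5) + 1):
    if is_prime[p]:
        for m in range(p * p, bound + 1, p):
            is_prime[m] = False
twins = [(p, p + 2) for p in range(2, bound - 1) if is_prime[p] and is_prime[p + 2]]
print(twins)
```

Below 100 this turns up eight pairs, from (3, 5) up to (71, 73).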

Less publicized is a proof of Goldbach’s Odd Conjecture. Goldbach’s Conjecture is the famous one that every even number bigger than two can be written as the sum of two primes. An equivalent form would be to say that every whole number — even or odd — larger than five can be written as the sum of three primes. Goldbach’s Odd Conjecture cuts the problem down, claiming only that every odd whole number greater than five can be written as the sum of three primes. And it’s this which Harald Andres Helfgott claims to have a proof for. (He also claims to have a proof that every odd number greater than seven can be written as the sum of three odd primes, that is, that two isn’t needed for more than single-digit odd numbers.)
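The odd case is easy to spot-check for small numbers, which proves nothing but is reassuring. A sketch; the bound of 100 and the helper function are my own:

```python
from itertools import combinations_with_replacement

def primes_below(n):
    # Simple sieve of Eratosthenes.
    sieve = [False, False] + [True] * (n - 1)
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            for m in range(p * p, n + 1, p):
                sieve[m] = False
    return [p for p, flag in enumerate(sieve) if flag]

ps = primes_below(100)
# For each odd number from 7 to 99, record one way to write it
# as a sum of three primes.
witness = {}
for odd in range(7, 100, 2):
    for triple in combinations_with_replacement(ps, 3):
        if sum(triple) == odd:
            witness[odd] = triple
            break
print(witness[7], witness[99])
```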


How I Make Myself Look Foolish

It seems to me that I need to factor numbers more often than most people do. I can’t even attribute this to my being a mathematician, since I don’t think along the lines of anything like mathematical work; I just find that I need to know, say, that 272,250 is what you get by multiplying 2 and 3 to the second power and 5 to the third power and 11 to the second power. And I reliably go to places I know will do calculations quickly, like the desktop Calculator application or what you get from typing mathematical expressions into Google, and find that since the last time I looked they still haven’t added a factorization tool. I have tools I can use, particularly Matlab or its open-source work-just-enough-alike-to-make-swapping-code-difficult replica Octave, which takes a long time to start up for one lousy number.

So I got to thinking: I’ve wanted to learn a bit about writing apps, and surely, writing a factorization app is both easy and quick and would prove I could write something. The routine is easy, too: take a number (272,250) as input; then divide by two as many times as you can (just one, giving 136,125), then divide by three as many times as you can (twice, giving 15,125), then by five as many times as you can (three times, reaching 121), then by seven (you can’t), then eleven (twice, reaching 1), until you’ve run the whole number down. You just need to divide repeatedly by the prime numbers, starting at two, and going up only to the square root of whatever your input number is.
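That routine translates almost word for word into code. A sketch using trial division; trying every integer rather than only primes is harmless, since a composite can never divide what's left once its prime factors are gone:

```python
def factorize(n):
    """Return the prime factorization of n as a {prime: exponent} dict."""
    factors = {}
    d = 2
    while d * d <= n:
        # Divide out d as many times as possible before moving on.
        while n % d == 0:
            factors[d] = factors.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        # Whatever remains is itself prime.
        factors[n] = factors.get(n, 0) + 1
    return factors

print(factorize(272250))  # {2: 1, 3: 2, 5: 3, 11: 2}
```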

Without bothering to program, then, I thought about how I could make this a more efficient routine. Figuring out more efficient ways to code is good practice, because if you think long enough about how to code efficiently, you can feel satisfied that you would have written a very good program and never bother to actually do it, which would only spoil the beauty of the code anyway. Here’s where the possible inefficiency sets in: how do you know what all the prime numbers up to the square root of whatever number you’re interested in are?
