Using my A to Z Archives: Hypersphere


Sorry to be late. We discovered baby fish in a tank that we had thought was empty, and that needed some quick attention. The water seems nearly all right and we’re taking measures to get the nitrate back in line. Fish-keeping is a great hobby for someone with a tendency towards obsessions who likes numbers, because there is no end of tests you can start running and charts you can start to keep.

So I’ve seen two baby fish, one about the width of a fingernail and one about half that. We’re figuring to keep them inside until they’re large enough not to be accidentally eaten by the bigger goldfish, which means they might just be in there until we move fish inside for the winter. We’ll see.

Back to my archives, though. The hypersphere is a piece from the first A-to-Z I ever did. I could probably write a more complicated essay today. But the hypersphere is a good example of taking a familiar concept, the circle and the sphere, and generalizing it: looking at what’s particularly interesting in a concept and how it might apply in different contexts. So it’s a good introduction to a useful bit of geometry, yes, but also to a kind of thinking mathematicians do all the time.

Using my A to Z Archives: Hamiltonian


While looking through my past H essays I noticed a typo in Hamiltonian, an essay from the 2019 A-to-Z. Every time I look at an old essay I find a typo, even in ones I’ve proofread before. Still, I choose to take it as a sign that this is an auspicious choice.

The Hamiltonian is one of the big important functions of mathematical physics. For all that, I remember being introduced to it, in a Classical Mechanics class, very casually, as though it were just a slightly different Lagrangian. Hamiltonians are very like Lagrangians. Both are rewritings of Newtonian mechanics. They demand more structure, more setup, to use. But they give fine things in trade. So they are worth knowing a bit about.
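
For a quick taste of what that extra structure buys, here is the standard textbook form, my addition rather than anything from the original essay. For a single particle of mass m, with position q, momentum p, and potential energy V, the Hamiltonian is the total energy, and it generates the motion:

H(q, p) = \frac{p^2}{2m} + V(q)

\frac{dq}{dt} = \frac{\partial H}{\partial p} \qquad \frac{dp}{dt} = -\frac{\partial H}{\partial q}

Two symmetric first-order equations in place of Newton’s one second-order equation; that symmetry is a lot of what you get in trade.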

My All 2020 Mathematics A to Z: Hilbert’s Problems


Beth, author of the popular inspiration blog I Didn’t Have My Glasses On …, proposed this topic. Hilbert’s problems are a famous set of questions. I couldn’t hope to summarize them all in an essay of reasonable length. I’d have trouble doing them justice even in a short book. But there are still things to say about them.

Color cartoon illustration of a coati in a beret and neckerchief, holding up a director's megaphone and looking over the Hollywood hills. The megaphone has the symbols + x ÷ (the division obelus) and = on it. The Hollywood sign is, instead, the letters MATHEMATICS. In the background are spotlights, with several of them crossing so as to make the letters A and Z; one leg of the spotlights has 'TO' in it, so the art reads out, subtly, 'Mathematics A to Z'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Hilbert’s Problems.

It’s easy to describe what Hilbert’s Problems are. David Hilbert, at the 1900 International Congress of Mathematicians, listed ten important problems of the field. In print he expanded this to 23 problems. They covered topics like number theory, group theory, physics, geometry, differential equations, and more. One of the problems was solved that year. Eight of them have been resolved fully. Another nine have been partially answered. Four remain unanswered. Two have generally been regarded as too vague to resolve.

Everyone in mathematics agrees they were big, important questions. Questions that represented what the mathematicians of 1900 would most want to know. Questions that have guided mathematical research for, so far, 120 years.

It does present us with a dilemma. Were Hilbert’s problems listed because he understood what mathematicians would find important? Or did mathematicians find them important because Hilbert listed them? Sadly, mathematicians know of no professionals who have studied questions like this and could offer insight.

There is reason to say that Hilbert’s judgement was good. He listed, for example, the Riemann hypothesis. The hypothesis is still unanswered. Many interesting results would follow from it being proved true, or proved false, or proved unanswerable. Hilbert did not list Fermat’s Last Theorem, unresolved then. Any mathematician would have liked an answer. But nothing of consequence depends on it. But then he also listed making advances in the calculus of variations. A good goal, but not one that requires particular insight to want.

So here is a related problem. Why hasn’t anyone else made such a list? A concise summary of the problems that guides mathematical research?

It’s not because no one tried. At the 1912 International Congress of Mathematicians, Edmund Landau identified four problems in number theory worth solving. None of them have been solved yet. Yutaka Taniyama listed three dozen problems in 1955. William Thurston put forth 24 questions in 1982. Stephen Smale, famous for work in chaos theory, gathered a list of 18 questions in 1998. Barry Simon offered a list of fifteen in 2000. Also in 2000 the Clay Mathematics Institute put up seven problems, with a million-dollar bounty on each. Jair Minoro Abe and Shotaro Tanaka gathered 22 questions into a list in 2001. The United States Defense Advanced Research Projects Agency put out a list of 23 in 2007.

Apart from Smale’s and the Clay Mathematics Institute’s lists, I had never heard of any of them either. Why not? What was special about Hilbert’s list?

For one, he was David Hilbert. Hilbert was a great mathematician, held in high esteem then and now. Besides his list of problems he’s known for the axiomatization of geometry, which brought not just new logical rigor but a new, formalist, perspective on mathematics. In this, for example, we give up the annoyingly hard task of saying exactly what we mean by a point and a line and a plane. We instead talk about how points and lines and planes relate to each other, definitions we can give. He’s also known for general relativity: Hilbert and Albert Einstein developed its field equations at the same time. We have Hilbert spaces and Hilbert curves and Hilbert metrics and Hilbert polynomials. Fans of pop mathematics speak of the Hilbert Hotel, a structure with infinitely many rooms, used to explore infinitely large sets.

So he was a great mind, well-versed in many fields. And he was in an enviable position, professor of mathematics at the University of Göttingen. At the time, German mathematics was held in particularly high renown. When you see, for example, mathematicians using ‘Z’ as shorthand for ‘integers’? You are seeing a thing that makes sense in German. (It’s for “Zahlen”, the German word for numbers.) Göttingen was at the top of German mathematics, and would be until the Nazi purges of academia. It would be hard to find a more renowned position.

And he was speaking at a great moment. The transition from one century to another is a good time for ambitious projects and declarations to be remembered. And this was only the second meeting of the International Congress of Mathematicians, which gave it particular importance. International Congresses of anything were new in the late 19th century. Many fields, not only mathematics, were asserting their professionalism at the time. It’s when we start to see professional organizations for specific subjects, not just “Science”. It’s when (American) colleges begin offering elective majors for their undergraduates, and when they begin offering PhD degrees.

So it was a moment when mathematics, like many fields (and nations), hoped to define its institutional prestige. Having an ambitious goal is one way to do that.

It was also an era when mathematicians were thinking seriously about what the field was about. The results were mixed. In the last decades of the 19th century, mathematicians had put differential calculus on a sound logical footing. But they then found strange things in, for example, mathematical physics. Boltzmann’s H-theorem (1872) tells us that entropy in a system of particles always increases. Poincaré’s recurrence theorem (1890) tells us a system of particles has to, eventually, return to its original condition. (Or to something close enough.) And therefore it returns to its original entropy, undoing any increase. Both are sound theorems; how can they not conflict?

Even ancient mathematics had new uncertainty. In 1882 Moritz Pasch discovered that Euclid, and everyone doing plane geometry since then, had been using an axiom no one had acknowledged. (If a line that doesn’t pass through any vertex of a triangle intersects one leg of the triangle, then it also meets one other leg of the triangle.) It’s a small and obvious thing. But if everyone had missed it for thousands of years, what else might be overlooked?

I wish now to share my interpretation of this background. And with it my speculations about why we care about Hilbert’s Problems and not about Thurston’s. And I wish to emphasize that, whatever my pretensions, I am not a professional historian of mathematics. I am an amateur and my training consists of “have read some books about a subject of interest”.

By 1900 mathematicians wanted the prestige and credibility and status of professional organizations. Who would not? But they were also aware the foundation of mathematics was not as rigorous as they had thought. It was not yet the “crisis of foundations” that would drive the philosophy of mathematics in the early 20th century. But the prelude to the crisis was there. And here was a universally respected figure, from the most prestigious mathematical institution. He spoke to all the best mathematicians in a way they could never have been addressed before. And presented a compelling list of tasks to do. These were good tasks, challenging tasks. Many of these tasks seemed doable. One was even done almost right away.

And they covered a broad spectrum of mathematics of the time. Everyone saw at least one problem relevant to their field, or to something close to their field. Landau’s problems, posed twelve years later, were all about number theory. Not even all number theory; about prime numbers. That’s nice, but it will only briefly stir the ambitions of the geometer or the mathematical physicist or the logician.

By the time of Taniyama, though? 1955? Times had changed. Taniyama is no inconsiderable figure. The Taniyama-Shimura conjecture is a major piece of the study of elliptic curves. It’s how we have a proof of Fermat’s last theorem. But by then, too, mathematics is not so insecure. We have several good ideas of what mathematics is and why it should work. It has prestige and institutional authority. It has enough Congresses and Associations and Meetings that no one can attend them all. It’s more so by 1982, when William Thurston set out his questions. I know I’m aware of Stephen Smale’s list because I was a teenager during the great fractals boom of the 80s and knew Smale’s name. Also because he published his list near the time I finished my quals. Quals are an important step in pursuing a doctorate. After them you look for a specific thesis problem. I was primed to hear about great ambitious projects I could not possibly complete.

Only the Clay Mathematics Institute’s list has stood out, aided by its catchy name of Millennium Prizes and its offer of quite a lot of money. That’s a good memory aid. Any lay reader can understand that motivation. Two of the Millennium Prize problems were also Hilbert’s problems. One in whole (the Riemann hypothesis again). One in part (one about solutions to elliptic curves). And as the name states, it came out in 2000. It was a year when many organizations were trying to declare bold and fresh new starts for a century they hoped would be happier than the one before. This, too, helps the memory. Who has any strong association with 1982, unless they were born or got their driver’s license that year?

These are my suppositions, though. I could be giving a too-complicated answer. It’s easy to remember that United States President John F Kennedy challenged the nation to land a man on the moon by the end of the decade. Space enthusiasts, wanting something they respect to happen in space, sometimes long for a president to make a similar strong declaration of an ambitious goal and specific deadline. President Ronald Reagan in 1984 declared there would be a United States space station by 1992. In 1986 he declared there would be by 2000 a National Aerospace Plane, capable of flying from Washington to Tokyo in two hours. President George H W Bush in 1989 declared there would be humans on the Moon “to stay” by 2010 and to Mars thereafter. President George W Bush in 2004 declared the Vision for Space Exploration, bringing humans to the moon again by 2020 and to Mars thereafter.

No one has cared about any of these plans. Possibly because the first time a thing is done, it has a power no repetition can claim. But also perhaps because the first attempt succeeded. That success was not due only to its being first, of course, but to the factors that made its goal important to a great number of people for long enough to see it through.

Which brings us back to the Euthyphro-like dilemma of Hilbert’s Problems. Are they influential because Hilbert chose well, or did Hilbert’s choosing them make them influential? I suspect this is a problem that cannot be resolved.


Thank you for reading. This and the other A-to-Z topics for 2020 should be at this link. All my essays for this and past A-to-Z sequences are at this link. And I am taking nominations for J, K, and L topics. I’m grateful for anything you can offer me.

How July 2020 Showed People are Getting OK With Less Comics Here


I’d like to once again take a short look at my readership figures, this time for July 2020. All my projects start out trying to be short and then they billow out to 2,500 words. I don’t know.

I posted 18 things in July. This is above what I do outside A-to-Z months, even without the Reading the Comics posts. There were 1,560 page views in July, which is a higher total than June offered. It’s below the twelve-month running average of 2,323.2 views per month. That stretch includes the anomalously high October 2019 figure, though. Take that out and my page view average was 1,746.5, so I’m getting a better sense of how much people want to see me explain comic strips.

There were 1,005 unique visitors here in July. I’m always glad to see that above the 1,000-person mark. The twelve-month running average was 1,579.0 unique visitors, which is a bit higher. That includes the big October 2019 surge, though. Take that out and the running average was 1,144.2 unique visitors, closer to where I did end up.

This is dangerous to observe, but the median page view count for the previous twelve months was 1,741; the median unique visitors count was 1,130. Medians are less vulnerable to extremes in a sample (extreme highs or lows), so maybe they’re a better guide to whether the month saw readership grow. I have no clear answer yet; I’ll keep this up until I do.

Bar chart of monthly readership for about two and a half years. There's a ridiculously high peak at October 2019. The monthly readership and unique visitors rose in July after a big drop for June 2020.
Oh, yeah, I don’t know what this offer about earning money is supposed to be but I also know anyone talking about earning money blogging is pulling a scam.

There were 74 things liked in July, above the running average of 60.3. There were 26 comments, comfortably above the running average of 16.3. A-to-Z months have an advantage in comments, certainly.

Rated per posting, the views and visitors were less good. 86.7 views per posting, well below the mean of 129.2. 55.8 unique visitors per posting, below the 87.2 average. But, then, 4.1 likes per posting, above the 3.5 average. And 1.4 comments per posting, above the 1.0 running average.

I want to start looking at just the five most popular posts of the month gone by. That got foiled when three posts all tied for the fifth-most-popular posting. Well, I can deal. The most popular things posted this past month were:

I started the month with 1,498 posts, which have gathered altogether 109,307 views from a logged 60,842 unique visitors.

I published 11,220 words in July, even though so many of my posts were just heads-ups pointing to older pieces. It works out to an average of 863.1 words per posting in July. My words per post for the year 2020, so far, has dropped to 663. It had been 672 the month before.

If you’d like to be a regular reader, please use the “Follow Nebusresearch” button on this page. Or add my RSS feed to whatever reader you have. If you lack an RSS reader, get a free account at Livejournal or Dreamwidth: their Friends pages can load RSS feeds from whatever source. (Also, if you see any blog you like, try adding /rss or /feed or /rss+xml or /rss+atom to the end of its URL. One of these will often work.) Posts are announced automatically on my Twitter account, @nebusj, although Twitter’s been going through a stretch of not letting me on again. I’m trying to hang out more on the mathematics-themed Mathstodon account at @nebusj@mathstodon.xyz. Thank you for reading.

Using my A to Z Archives: Gaussian Primes


I’d like today to share a piece from 2017. Gaussian Primes are a fun topic, as they’re one of those things that steps into group theory without being too abstract. And they show how we can abstract a familiar enough idea — here, prime numbers — into something that applies in new contexts. In this case, in complex numbers, which are looking likely to be the running theme for this year’s A-to-Z.

Later in 2017 I talked about prime numbers in general, and how “prime” isn’t an idea that exists in the number itself. It exists in the number and the kind of number and how multiplication works for that kind of number.
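
That point can be made concrete in a few lines. Here is a minimal sketch of my own, using the standard norm characterization of Gaussian primes; the function names are mine, not anything from the essay:

```python
def is_prime(n):
    """Ordinary trial-division primality test for small integers."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_gaussian_prime(a, b):
    """Is a + b*i prime among the Gaussian integers?

    Standard characterization: off the axes, a + b*i is prime exactly
    when its norm a^2 + b^2 is an ordinary prime; on the axes, exactly
    when the nonzero part is an ordinary prime of the form 4k + 3.
    """
    if b == 0:
        return is_prime(abs(a)) and abs(a) % 4 == 3
    if a == 0:
        return is_prime(abs(b)) and abs(b) % 4 == 3
    return is_prime(a * a + b * b)

print(is_gaussian_prime(3, 0))   # True: 3 stays prime here
print(is_gaussian_prime(5, 0))   # False: 5 = (2 + i)(2 - i)
print(is_gaussian_prime(1, 1))   # True: its norm, 2, is prime
```

So 5, prime among the ordinary integers, stops being prime once multiplication happens among complex integers. That is exactly the point about context.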


And I’m still eagerly taking nominations for topics for J, K, or L. Please leave a comment at this link. Thank you.

Using my A to Z Archives: Grammar


If you looked at my appeal for A-to-Z topics for the letter G, when I posted it a couple weeks back, you maybe looked over a bunch of essays I quite liked. I still do; G has been a pretty good letter for me. So one of the archive pieces I’d like to bring back to attention is Grammar, from the Leap Day 2016 A-to-Z. It’s about how we study how to make mathematical systems. That you can form theorems about the mechanism for forming theorems is a wild discovery, and the subject can be hard to understand. At least some of its basic principles are accessible, I hope.

And if you’d like me to discuss more topics in mathematical logic, or other fields of mathematics that start with J, K, or L, please leave a comment at this link. Thank you.

I’m looking for J, K, and L topics for the All 2020 A-to-Z


As the subject line says, I’m looking at what the next couple of letters should be for my 2020 A-to-Z. Please put in a comment here with something you think it’d be interesting to see me explain. I’m up for most any topic with some mathematical connection, including biographies.

Please, if you suggest something, let me know of any project that you have going on. I’m happy to share links to other blogs, teaching projects, YouTube channels, or whatever else you have going on that’s worth sharing.

I am open to revisiting a subject from past years, if I think I could do a better piece on it. Topics I’ve already covered, starting with the letter ‘J’, are:


Topics I’ve already covered, starting with the letter ‘K’, are:


Topics I’ve already covered, starting with the letter ‘L’, are:


The essays for my All 2020 Mathematics A to Z are at this link. Posts from all of the A-to-Z posts, this year and previous years, are at this link.

My All 2020 Mathematics A to Z: J Willard Gibbs


Charles Merritt suggested a biographical subject for G. (There are often running themes in an A-to-Z and this year’s seems to be “biography”.) I don’t know of a web site or other project that Merritt has that’s worth sharing, but if I learn of it, I’ll pass it along.

Color cartoon illustration of a coati in a beret and neckerchief, holding up a director's megaphone and looking over the Hollywood hills. The megaphone has the symbols + x ÷ (the division obelus) and = on it. The Hollywood sign is, instead, the letters MATHEMATICS. In the background are spotlights, with several of them crossing so as to make the letters A and Z; one leg of the spotlights has 'TO' in it, so the art reads out, subtly, 'Mathematics A to Z'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

J Willard Gibbs.

My love and I, like many people, tried last week to see the comet NEOWISE. It took several attempts. When finally we had binoculars and dark enough sky we still had the challenge of where to look. Finally, determined searching and peripheral vision (which is more sensitive to faint objects) found the comet. But how to guide the other to a thing barely visible except with binoculars? Between the silhouettes of trees and a convenient pair of guide stars we were able to put the comet’s approximate location in words. Soon we were experts at finding it. We could turn a head, hold up the binoculars, and see a blue-ish puff of something.

To see a thing is not to perceive it. Astronomy is full of things seen but not recognized as important. There is a great need for people who can describe to us how to see a thing. And this is part of the significance of J Willard Gibbs.

American science, in the 19th century, had an inferiority complex compared to European science. Fairly, to an extent: what great thinkers did the United States have to compare to William Thomson or Joseph Fourier or James Clerk Maxwell? The United States tried to argue that its thinkers were more practical-minded, with Joseph Henry as example. Without downplaying Henry’s work, though? The stories of his meeting the great minds of Europe are about how he could fix gear that Michael Faraday could not. There is a genius in this, yes. But we are more impressed by magnetic fields than by any electromagnet.

Gibbs is the era’s exception, a mathematical physicist of rare insight and creativity. In his ability to understand problems, yes. But also in organizing ways to look at problems so others can understand them better. A good comparison is to Richard Feynman, who understood a great variety of problems, and organized them for other people to understand. No one, then or now, doubted Gibbs compared well to the best European minds.

Gibbs’s life story is almost the type case for a quiet academic life. He was born into an academic/ministerial family. Attended Yale. Earned what appears to be the first PhD in engineering granted in the United States, and only the fifth non-honorary PhD in the country. Went to Europe for three years, then came back home, got a position teaching at Yale, and never left again. He was appointed Professor of Mathematical Physics, the first such in the country, at age 32 and before he had even published anything. This speaks of how well-connected his family was. Also that he was well-off enough not to need a salary. He wouldn’t take one until 1880, when Yale offered him two thousand dollars per year against Johns Hopkins’s three thousand.

Between taking his job and taking his salary, Gibbs took time to remake physics. This was in thermodynamics, possibly the most vibrant field of 19th century physics. The wonder and excitement we see in quantum mechanics resided in thermodynamics back then. Though with the difference that people with a lot of money were quite interested in the field’s results. These were people who owned railroads, or factories, or traction companies. Extremely practical concerns.

What Gibbs offered was space, particularly phase space. Phase space describes the state of a system as a point in … space. The evolution of a system is typically a path winding through that space. Constraints, like the conservation of energy, we can usually understand as fixing the system to a surface in phase space. Phase space can be as simple as “the positions and momentums of every particle”, and that often is what we use. It doesn’t need to be, though. Gibbs put out diagrams where the coordinates were things like temperature or pressure or entropy or energy. Looking at these can let one understand a thermodynamic system. They use our geometric sense much the same way that charts of high- and low-pressure fronts let one understand the weather. James Clerk Maxwell, famous for electromagnetism, was so taken by this that he created plaster models of the described surface.
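
As a toy picture, assuming nothing of Gibbs beyond the idea itself: a frictionless mass on a spring has a two-dimensional phase space, position and momentum, and conservation of energy pins the trajectory to an ellipse in that plane. A minimal sketch of my own:

```python
m, k = 1.0, 4.0      # mass and spring constant, arbitrary picks
q, p = 1.0, 0.0      # starting position and momentum
dt = 0.001           # time step

def energy(q, p):
    """The conserved quantity; one of its level sets is the ellipse."""
    return p * p / (2 * m) + k * q * q / 2

e0 = energy(q, p)
for _ in range(100_000):
    p -= k * q * dt      # the spring force nudges the momentum
    q += p / m * dt      # the momentum nudges the position
print(round(e0, 2), round(energy(q, p), 2))
# Both print 2.0: the phase-space point wandered along the
# constant-energy surface instead of leaving it.
```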

This is, you might imagine, pretty serious, heady stuff. So you get why Gibbs published it in the Transactions of the Connecticut Academy: his brother-in-law was the editor. It did not give the journal lasting fame. It gave his brother-in-law a heightened typesetting bill, one that Yale faculty and New Haven businessmen donated funds to cover.

Which gets to the less-happy parts of Gibbs’s career. (I started out with ‘less pleasant’ but it’s hard to spot an actually unpleasant part of his career.) This work sank without a trace, despite Maxwell’s enthusiasm. It emerged only in the middle of the 20th century, as physicists came to understand their field as an expression of geometry.

That’s all right. Chemists understood the value of Gibbs’s thermodynamics work. He introduced the enthalpy, an important thing that nobody with less than a Master’s degree in Physics feels they understand. Changes of enthalpy describe how heat transfers. And the Gibbs Free Energy, which measures how much reversible work a system can do if the temperature and pressure stay constant. A chemical reaction where the Gibbs free energy is negative will happen spontaneously. If the system’s in equilibrium, the Gibbs free energy won’t change. (I need to say the Gibbs free energy as there’s a different quantity, the Helmholtz free energy, that’s also important but not the same thing.) And, from this, the phase rule. That describes how many independently controllable variables there are when mixing substances.
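
The sign rule for the Gibbs free energy is easy to play with. A small sketch of my own, with made-up numbers; only the sign of the result carries meaning:

```python
def delta_G(delta_H, T, delta_S):
    """Change in Gibbs free energy at constant temperature and pressure."""
    return delta_H - T * delta_S

dH = -50_000.0   # joules: the reaction releases heat
dS = -100.0      # joules per kelvin: the products are more ordered

for T in (200.0, 400.0, 600.0):
    dG = delta_G(dH, T, dS)
    print(T, dG, "spontaneous" if dG < 0 else "not spontaneous")
# At low temperatures the heat term wins and the reaction goes on its
# own; at high temperatures the entropy term wins and it does not.
```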

In the 1880s Gibbs worked on something which exploded through physics and mathematics. This was vectors. He didn’t create them from nothing. Hermann Günther Grassmann — whose fascinating and frustrating career I hadn’t known of before this — laid much of the foundation. Building on Grassmann and W K Clifford, though, let Gibbs present vectors as we now use them in physics. How to define dot products and cross products. How to use them to simplify physics problems. How they’re less work than quaternions are. Gibbs was not the only person to recast physics in vector form. Oliver Heaviside is another important mathematical physicist of the time who did. But Gibbs identified the tools extremely well. You can read his Elements of Vector Analysis. It’s not very different from what a modern author would write on the subject. It’s terser than I would write, but terse is also respectful of someone’s time and ability to reason out explanations of small points.

There are more pieces. They don’t all fit in a neat linear timeline; nobody’s life really does. Gibbs’s thermodynamics work, leading into statistical mechanics, foreshadows much of quantum mechanics. He’s famous for the Gibbs Paradox, which concerns the entropy of mixing together two different kinds of gas. Why is this different from mixing together two containers of the same kind of gas? And the answer is that we have to think more carefully about what we mean by entropy, and about the differences between containers.

There is a Gibbs phenomenon, known to anyone studying Fourier series. The Fourier series is a sum of sine and cosine functions. It approximates an arbitrary original function. The series is a continuous function; you could draw it without lifting your pen. If the original function has a jump, though? A spot where you have to lift your pen? The Fourier series for that represents the jump with a region where its quite-good approximation suddenly turns bad. It wobbles around the ‘correct’ values near the jump. Using more terms in the series doesn’t make the wobbling shrink. Gibbs described it, in studying sawtooth waves. As it happens, Henry Wilbraham first noticed and described this in 1848. But Wilbraham’s work went unnoticed until after Gibbs’s rediscovery.
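
You can watch the wobble refuse to shrink. A sketch of my own, not from the essay, summing the Fourier series of a square wave that jumps from -1 to 1 at x = 0:

```python
import math

def partial_sum(x, n_terms):
    """Fourier partial sum for the square wave jumping from -1 to 1."""
    return (4 / math.pi) * sum(
        math.sin((2 * k + 1) * x) / (2 * k + 1) for k in range(n_terms)
    )

for n_terms in (10, 100, 1000):
    # hunt for the highest point just to the right of the jump
    peak = max(partial_sum(i * 1e-4, n_terms) for i in range(1, 2000))
    print(n_terms, round(peak, 3))
# The peak stays near 1.18 on every line: more terms squeeze the
# wobble into a narrower region but never flatten it.
```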

And then there was a bit in which Gibbs was intrigued by a comet that prolific comet-spotter Lewis Swift observed in 1880. Finding the orbit of a thing from a handful of observations is one of the great problems of astronomical mathematics. Carl Friedrich Gauss started the 19th century with his work projecting the orbit of the newly-discovered and rapidly-lost asteroid Ceres. Gibbs put his vector notation to the work of calculating orbits. His technique, I am told by people who seem to know, is less difficult and more numerically stable than what was used earlier.

Swift’s comet of 1880, it turns out, was spotted in 1869 by Wilhelm Tempel. It was lost after its 1908 perihelion. Comets have a nasty habit of changing their orbits on us. But it was rediscovered in 2001 by the Lincoln Near-Earth Asteroid Research program. It’s next to reach perihelion the 26th of November, 2020. You might get to see this, another thing touched by J Willard Gibbs.


This and the other A-to-Z topics for 2020 should be at this link. All my essays for this and past A-to-Z sequences are at this link. I’ll soon be opening up for J, K, and L topics, too. Thanks for reading.

Using my A to Z Archives: Fractions (continued)


There are important pieces of mathematics. Anyone claiming that differential equations are a niche interest is lying to you. And then there are niche interests. These are worthwhile fields. It’s just that you can get a good well-rounded mathematical education while being only a little aware of them. And things can move from being important to niche, or back again.

Continued fractions are one of those things I had understood to have fallen from importance. They had a vogue in Western mathematics, since they do some problems pretty neatly and cleverly. But they’re discussed more rarely these days. The speculation I’ve seen is that they don’t quite have a logical place in the curriculum, being a little too hard when you’re learning fractions but seeming too easy when you’re learning infinite series, that sort of thing. My experience, it turns out, was not universal, and that’s an exciting thing to learn in the comments.
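
For anyone who never met them: a continued fraction peels off the integer part of a number, flips what remains, and repeats. A sketch of my own, using exact rational arithmetic:

```python
from fractions import Fraction

def continued_fraction(x, max_terms=12):
    """Coefficients [a0; a1, a2, ...] of the Fraction x."""
    terms = []
    for _ in range(max_terms):
        a = x.numerator // x.denominator   # the integer part
        terms.append(a)
        x -= a                             # keep the fractional part
        if x == 0:
            break
        x = 1 / x                          # and flip it
    return terms

print(continued_fraction(Fraction(355, 113)))   # [3, 7, 16]
print(continued_fraction(Fraction(649, 200)))   # [3, 4, 12, 4]
# 355/113, the classic rational approximation to pi, unpacks as
# 3 + 1/(7 + 1/16).
```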

You have under a week to grab some free e-Books from Springer


Springer released a bunch of its e-Books for free, as one of those gestures of corporate-brand goodwill for the pandemic. The free-download period is slated to end the 31st of July. So if you have any interest in academic books, take a little while and read over the options.

Harish Narayanan kindly improved on Springer’s organization scheme by grouping them by subject and ordering them by title, also adding their Goodreads ratings, where available. You can find that grouping here, in a table that’s easy to search. The categories are not perfect — Roman Kossak’s Mathematical Logic is grouped under “Religion and Philosophy”, for example — but it’s a good starting point. And all it’ll cost is download time.

I do not know whether this is region-limited, but if it is, it is limited in the most annoying and foolish way possible.

Using my A to Z Archives: Fourier series


My impression, not checked against evidence, is that my recaps here feature the 2019 series more than any other. Well, I really liked the 2019 series. I don’t think that’s just recentism. On rereading them, I often feel little pleasant surprises along the way. That’s a good feeling.

So here was my ‘F’ entry for 2019: Fourier series. They’re important. They’re built out of easy pieces, though. And they’re full of weird bits. You can understand why someone would spend a career studying them. And I almost give enough information to actually use the things, if you have enough background to understand how to use them. I like hitting that sweet spot.

My All 2020 Mathematics A to Z: Fibonacci


Dina Yagodich suggested today’s A-to-Z topic. I thought a quick little biography piece would be a nice change of pace. I discovered things were more interesting than that.

Color cartoon illustration of a coati in a beret and neckerchief, holding up a director's megaphone and looking over the Hollywood hills. The megaphone has the symbols + x ÷ (the division obelus) and = on it. The Hollywood sign is, instead, the letters MATHEMATICS. In the background are spotlights, with several of them crossing so as to make the letters A and Z; one leg of the spotlights has 'TO' in it, so the art reads out, subtly, 'Mathematics A to Z'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Fibonacci.

I realized preparing for this that I have never read a biography of Fibonacci. This is hardly unique to Fibonacci. Mathematicians buy into the legend that mathematics is independent of human creation. So the people who describe it are of lower importance. They learn a handful of romantic tales or good stories. In this way they are much like humans. I know at least a loose sketch of many mathematicians. But Fibonacci is a hard one for biography. Here, I draw heavily on the book Fibonacci, his numbers and his rabbits, by Andriy Drozdyuk and Denys Drozdyuk.

We know, for example, that Fibonacci lived until at least 1240. This because in 1240 Pisa awarded him an annual salary in recognition of his public service. We think he was born around 1170, and died … sometime after 1240. This seems like a dismal historical record. But, for the time, for a person of slight political or military importance? That’s about as good as we could hope for. It is hard to appreciate how much documentation we have of lives now, and how recent a phenomenon that is.

Even a fact like “he was alive in the year 1240” evaporates under study. Italian cities, then as now, based the year on the time since the notional birth of Christ. Pisa, as was common, used the notional conception of Christ, on the 25th of March, as the new year. But we have a problem of standards. Should we count the year as the number of full years since the notional conception of Christ? Or as the number of full and partial years since that important 25th of March?

If the question seems confusing and perhaps angering let me try to clarify. Would you say that the notional birth of Christ that first 25th of December of the Christian Era happened in the year zero or in the year one? (Pretend there was a year zero. You already pretend there was a year one AD.) Pisa of Leonardo’s time would have said the year one. Florence would have said the year zero, if they knew of “zero”. Florence matters because when Florence took over Pisa, they changed Pisa’s dating system. Sometime later Pisa changed back. And back again. Historians writing later, aware of the Pisan 1240 on the document, may have corrected it to the Florence-style 1241. Or, aware of the change of the calendar and not aware that their source already accounted for it, redated it 1242. Or tried to re-correct it back and made things worse.

This is not a problem unique to Leonardo. Different parts of Europe, at the time, had different notions for the year count. Some also had different notions for what New Year’s Day would be. There were many challenges to long-distance travel and commerce at the time. Not the least was that the same sun might shine on at least three different years at once.

We call him Fibonacci. Did he? The question defies a quick answer. His given name was Leonardo, and he came from Pisa, so a reliable way to address him would have been “Leonardo of Pisa”, albeit in Italian. He was born into the Bonacci family. He did in some manuscripts describe himself as “Leonardo filio Bonacci Pisano”, give or take a few letters. My understanding is you can get a good fun quarrel going among scholars of this era by asking whether “Filio Bonacci” would mean “the son of Bonacci” or “of the family Bonacci”. Either is as good for us. It’s tempting to imagine the “Filio” being shrunk to “Fi” and the two words smashed together. But that doesn’t quite say that Leonardo did that smashing together.

Nor, exactly, when it did happen. We see “Fibonacci” used in mathematical works in the 19th century, followed shortly by attempts to explain what it means. We know of a 1506 manuscript identifying Leonardo as Fibonacci. But there remains a lot of unexplored territory.

Photograph of a Californian rabbit --- a small white rabbit with dark ears and musty grey snout --- sitting up in a cage as far from the camera as possible.
Penelope the rabbit is very happy to meet us!

If one knows one thing about Fibonacci though, one knows about the rabbits. They give birth to more rabbits and to the Fibonacci Sequence. More on that to come. If one knows two things about Fibonacci, the other is about his introducing Arabic numerals to western mathematics. I’ve written of this before. And the subject is … more ambiguous, again.

Most of what we “know” of Fibonacci’s life is some words he wrote to explain why he was writing his bigger works. If we trust he was not creating a pleasant story for the sake of engaging readers, then we can finally say something. (If one knows three things about Fibonacci, and then five things, and then eight, one is making a joke.)

Fibonacci’s father was, in the 1190s, posted to Bejaia, a port city on the Algerian coast. The father did something for Pisa’s duana there. And what is a duana? … Again, certainty evaporates. We have settled on saying it’s a customs house, and suppose our readers know what goes on in a customs house. The duana had something to do with clearing trade through the port. His father’s post was as a scribe. He was likely responsible for collecting duties and registering accounts and keeping books and all that. We don’t know how long Fibonacci spent there. “Some days”, during which he alleges he learned the digits 1 through 9. And after that, travelling around the Mediterranean, he saw why this system was good, and useful. He wrote books to explain it all and convince Europe that while Roman numerals were great, Arabic numerals were more practical.

It is always dangerous to write about “the first” person to do anything. Except for Yuri Gagarin, Alexei Leonov, and Neil Armstrong, “the first” to do anything dissolves into ambiguity. Gerbert, who would become Pope Sylvester II, described Arabic numerals (other than zero) by the end of the 10th century. He added in how this system along with the abacus made computation easier. Arabic numerals appear in the Codex Conciliorum Albeldensis seu Vigilanus, written in 976 AD in Spain. And it is not as though Fibonacci was the first European to travel to a land with Arabic numerals, or the first perceptive enough to see their value.

Allow that, though. Every invention has precursors, some so close that it takes great thinking to come up with a reason to ignore them. There must be some credit given to the person who gathers an idea into a coherent, well-explained whole. And with Fibonacci, and his famous manuscript of 1202, the Liber Abaci, we have … more frustration.

It’s not that Liber Abaci does not exist, or that it does not have what we credit it for having. We don’t have any copies of the 1202 edition, but we do have a 1228 manuscript, at least, and that starts out with the Arabic numeral system. And why this system is so good, and how to use it. It should convince anyone who reads it.

If anyone read it. We know of about fifteen manuscripts of Liber Abaci, only two of them reasonably complete. This seems sparse even for manuscripts in the days they had to be hand-copied. This until you learn that Baldassarre Boncompagni published the first known printed version in 1857. In print, in Italian, it took up 459 pages of text. Its first English translation, published by Laurence E Sigler in 2002(!) takes up 636 pages (!!). Suddenly it’s amazing that as many as two complete manuscripts survive. (Wikipedia claims three complete versions from the 13th and 14th centuries exist. And says there are about nineteen partial manuscripts with another nine incomplete copies. I do not explain this discrepancy.)

He had other books. The Liber Quadratorum, for example, a book about algebra. Wikipedia seems to say we have it through a single manuscript, copied in the 15th century. Practica Geometriae, translated from Latin in 1442 at least. A couple other now-lost manuscripts. A couple pieces about specific problems.

So perhaps only a handful of people read Fibonacci. Ah, but if they were the right people? He could have been a mathematical Velvet Underground, read by a hundred people, each of whom founded a new mathematics.

We could trace those hundred readers by the first thing anyone knows Fibonacci for. His rabbits, breeding in ways that rabbits do not, and the sequence of whole numbers those provide. Fibonacci did not discover this sequence. You knew that. Nothing in mathematics gets named for its discoverer. Virahanka, an Indian mathematician who lived somewhere between the sixth and eighth centuries, described the sequence exactly. Gopala, writing sometime in the 1130s, expanded on this.
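
The bookkeeping behind the rabbit story fits in a few lines; this sketch is mine, not anything out of the Liber Abaci. Each month every baby pair matures and every adult pair produces one new baby pair:

```python
def rabbit_counts(months):
    """Pairs of rabbits each month, under the idealized breeding rules."""
    adults, babies = 0, 1    # begin with a single newborn pair
    counts = []
    for _ in range(months):
        counts.append(adults + babies)
        adults, babies = adults + babies, adults
    return counts

print(rabbit_counts(10))   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
# Each total is the sum of the two before it: F(n) = F(n-1) + F(n-2).
```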

Photograph of a Californian rabbit --- white, with black ears and a skeptical-looking red eye --- lying on the other side of the pen cage from a Flemish giant --- a large yellow-brown rabbit. They are not actually nose-to-nose, but look close to it, and we see the profile of both rabbits.
Penelope and Sunshine have a moment of togetherness before they decide they want less togetherness.

This is not to say Fibonacci copied any of these (and more) Indian mathematicians. The world is large and manuscripts are hard to read. The sequence can be re-invented by anyone bored in the right way. Ah, but think of those who learned of the sequence and used it later on, following Fibonacci’s lead. For example, in 1611 Johannes Kepler wrote a piece that described Fibonacci’s sequence. But it does not name Fibonacci. He mentions other mathematicians, ancient and contemporary. The easiest supposition is he did not know he was writing something already seen. In 1844, Gabriel Lamé used Fibonacci numbers in studying algorithm complexity. He did not name Fibonacci either, though. (Lamé is famous today for making some progress on Fermat’s last theorem. He’s renowned for work in differential equations and on ellipse-like curves. If you have thought about what a neat weird shape the equation x^4 + y^4 = 1 can describe, you have trod in Lamé’s path.)

Things picked up for Fibonacci’s reputation in 1876, thanks to Édouard Lucas. (Lucas is notable for other things. Normal people might find it interesting that he proved by hand that the number 2^{127} - 1 was prime. This seems to be the largest prime number ever proven by hand. He also created the Tower of Hanoi problem.) In January of 1876, Lucas wrote about the Fibonacci sequence, describing it as “the series of Lamé”. By May, though, writing about prime numbers, he had read Boncompagni’s publications. He says how this thing “commonly known as the sequence of Lamé” was first presented by Fibonacci.

And Fibonacci caught Lucas’s imagination. Lucas shared, particularly, the phrasing of this sequence as something in the reproduction of rabbits. This captured mathematicians’ imaginations, and then the public’s. It’s akin to Émile Borel’s room of a million typing monkeys. By the end of the 19th century Leonardo of Pisa had both a name and fame.

We can still ask why. The proximate cause is Édouard Lucas, impressed (I trust) by Boncompagni’s editions of Fibonacci’s work. Why did Baldassarre Boncompagni think it important to publish editions of Fibonacci? Well, he was interested in the history of science. He edited the first Italian journal dedicated to the history of mathematics. He may have understood that Fibonacci was, if not an important mathematician, at least one who had interesting things to write. Boncompagni’s edition of Liber Abaci came out in 1857. By 1859 the state of Tuscany voted to erect a statue.

So I speculate, without confirmation, that at least some of Fibonacci’s good name in the 19th century was a reflection of Italian unification: the search for great scholars whose intellectual achievements could reflect well on a nation trying to build itself.

And so we have bundles of ironies. Fibonacci did write impressive works of great mathematical insight. And he was recognized at the time for that work. The things he wrote about Arabic numerals were correct. His recommendation to use them was taken, but by people who did not read his advice. After centuries of obscurity he got some notice. And a problem he did not create nor particularly advance brought him a fame that’s lasted a century and a half now, and looks likely to continue.

I am always amazed to learn there are people not interested in history.


And now I can try to get ahead of deadline for next week. This and all my other A-to-Z topics for the year should be at this link. All my essays for this and past A-to-Z sequences are at this link. And I am still taking topics to discuss in the coming weeks. Thank you for reading and please take care.

Using my A to Z Archives: Encryption schemes


If it does turn out that P equals NP we would, at least in principle, have wrecked encryption as we know it. So let me take this chance to mention my essay on Encryption Schemes, from last year’s A-to-Z. And that discusses some of what we look for in encryption, which includes both secrecy and error-free transmission.

In Our Time podcast repeats episode on P versus NP


The BBC’s general-discussion podcast In Our Time repeated another mathematics-themed session this week. The topic is P versus NP, a matter of computational complexity. P and NP here are shorthands to describe the amount of work needed to follow a procedure. And, particularly, how the amount of work changes as the size of the problem being worked on changes. We know there are problems of a complexity type P, and problems of a complexity type NP. What’s not known is whether those are, actually, the same, whether there’s a clever way of describing an NP problem so we can solve it with a P approach.
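
For a concrete feel of that asymmetry, here is a sketch of my own, not anything from the podcast, using subset sum, a classic NP problem. Checking a claimed answer is quick; the obvious search has a pile of candidates that doubles with every element added:

```python
from itertools import combinations

def verify(nums, subset, target):
    """Checking a claimed answer: the work grows only linearly.

    (This glosses over repeated values; fine for a sketch.)
    """
    return all(x in nums for x in subset) and sum(subset) == target

def search(nums, target):
    """Finding an answer the obvious way: up to 2^n subsets to try."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return subset
    return None

nums = [3, 34, 4, 12, 5, 2]
print(search(nums, 9))          # (4, 5), found by exhaustion
print(verify(nums, (4, 5), 9))  # True, checked in a handful of steps
```

P equalling NP would mean the search can always be done about as cheaply as the check. Nobody knows whether it can.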

I do not remember whether I heard this program when it originally broadcast in 2015. And I haven’t had time to listen to this one yet. But these discussions are usually pretty solid, and will get to discussing the ambiguities and limitations and qualifications of the field. So I feel comfortable recommending it even without a recent listen, which I will likely get to sometime during this week’s walks.

Using my A to Z Archives: e


There’s one past A-to-Z essay for the letter e that’s a natural follow-up to Thursday’s look at the exponential function. That would be the essay about the number that’s the base of the natural logarithm. It’s a number that was barely mentioned in that piece, because I ended up not needing it.

But a couple years ago I wrote a piece that was all e, including points like how curious a number it is. I hope that you enjoy that piece too.

My All 2020 Mathematics A to Z: Exponential


GoldenOj suggested the exponential as a topic. It seemed like a good important topic, but one that was already well-explored by other people. Then I realized I could spend time thinking about something which had bothered me.

In here I write about “the” exponential, which is a bit like writing about “the” multiplication. We can talk about 2^3 and 10^2 and many other such exponential functions. One secret of algebra, not appreciated until calculus (or later), is that all these different functions are a single family. Understanding one exponential function lets you understand them all. Mathematicians pick one, the exponential with base e, because we find that convenient. e itself isn’t a convenient number — it’s a bit over 2.718 — but it has some wonderful properties. When I write “the exponential” here, I mean this function, e^{t}.
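
That single-family claim is easy to check numerically: every base-b exponential is the base-e exponential run at a different speed, since b^t = e^{t \ln b} . A quick sketch of my own:

```python
import math

for base, t in ((2, 3), (10, 0.5), (0.5, 4)):
    print(base ** t, math.exp(t * math.log(base)))
# Each pair agrees, up to rounding: 8.0, then 3.1622..., then 0.0625.
```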

This piece will have a bit more mathematics, as in equations, than usual. If you like me writing about mathematics more than reading equations, you’re hardly alone. I recommend letting your eyes drop to the next sentence, or at least the next sentence that makes sense. You should be fine.

Color cartoon illustration of a coati in a beret and neckerchief, holding up a director's megaphone and looking over the Hollywood hills. The megaphone has the symbols + x ÷ (the division obelus) and = on it. The Hollywood sign is, instead, the letters MATHEMATICS. In the background are spotlights, with several of them crossing so as to make the letters A and Z; one leg of the spotlights has 'TO' in it, so the art reads out, subtly, 'Mathematics A to Z'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Exponential.

My professor for real analysis, in grad school, gave us one of those brilliant projects. Starting from the definition of the logarithm, as an integral, prove at least thirty things. They could be as trivial as “the log of 1 is 0”. They could be as subtle as how to calculate the log of one number in a different base. It was a great project for testing what we knew about why calculus works.

And it gives me the structure to write about the exponential function. Anyone reading a pop-mathematics blog about exponentials knows them. They’re these functions that, as the independent variable grows, grow ever-faster. Or that decay asymptotically to zero. Some readers know that, if the independent variable is an imaginary number, the exponential is a complex number too. As the independent variable grows, becoming a bigger imaginary number, the exponential doesn’t grow. It oscillates, a sine wave.

That’s weird. I’d like to see why that makes sense.

To say “why” this makes sense is doomed. It’s like explaining “why” 36 is divisible by three and six and nine but not eight. It follows from what the words we have mean. The “why” I’ll offer is reasons why this strange behavior is plausible. It’ll be a mix of deductive reasoning and heuristics. This is a common blend when trying to understand why a result happens, or why we should accept it.

I’ll start with the definition of the logarithm, as used in real analysis. The natural logarithm, if you’re curious. It has a lot of nice properties. You can use this to prove over thirty things. Here it is:

log\left(x\right) = \int_{1}^{x} \frac{1}{s} ds

The “s” is a dummy variable. You’ll never see it in actual use.
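
If you would like reassurance that the definition behaves, a crude midpoint Riemann sum over 1/s already matches the library logarithm. This check is my own sketch, not part of the original essay:

```python
import math

def log_by_area(x, steps=100_000):
    """Approximate the area under 1/s from 1 to x."""
    ds = (x - 1) / steps
    return ds * sum(1 / (1 + (i + 0.5) * ds) for i in range(steps))

for x in (2.0, 10.0):
    print(log_by_area(x), math.log(x))
# 0.69314... twice over, then 2.30258... twice over.
```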

So now let me summon into existence a new function. I want to call it g. This is because I’ve worked this out before and I want to label something else as f. There is something coming ahead that’s a bit of a syntactic mess. This is the best way around it that I can find.

g(x) = \frac{1}{c} \int_{1}^{x} \frac{1}{s} ds

Here, ‘c’ is a constant. It might be real. It might be imaginary. It might be complex. I’m using ‘c’ rather than ‘a’ or ‘b’ so that I can later on play with possibilities.

So the alert reader noticed that g(x) here means “take the logarithm of x, and divide it by a constant”. So it does. I’ll need two things built off of g(x), though. The first is its derivative. That’s taken with respect to x, the only variable. Finding the derivative of an integral sounds intimidating but, happy to say, we have a theorem to make this easy. It’s the Fundamental Theorem of Calculus, and it tells us:

g'(x) = \frac{1}{c}\cdot\frac{1}{x}

We can use the ‘ to denote “first derivative” if a function has only one variable. Saves time to write and is easier to type.

The other thing that I need, and the thing I really want, is the inverse of g. I’m going to call this function f(t). A more common notation would be to write g^{-1}(t) but we already have g'(x) in the works here. There is a limit to how many little one-stroke superscripts we need above g. This is the tradeoff to using ‘ for first derivatives. But here’s the important thing:

x = f(t) = g^{-1}(t)

Here, we have some extratextual information. We know the inverse of a logarithm is an exponential. We even have a standard notation for that. We’d write

x = f(t) = e^{ct}

in any context besides this essay as I’ve set it up.

What I would like to know next is: what is the derivative of f(t)? This sounds impossible to know, if we’re thinking of “the inverse of this integration”. It’s not. We have the Inverse Function Theorem to come to our aid. We encounter the Inverse Function Theorem briefly, in freshman calculus. There we use it to do as many as two problems and then hide away forever from the Inverse Function Theorem. (This is why it’s not mentioned in my quick little guide to how to take derivatives.) It reappears in real analysis for this sort of contingency. The Inverse Function Theorem tells us, if f is the inverse of g, that:

f'(t) = \frac{1}{g'(f(t))}

That g'(f(t)) means, use the rule for g'(x), with f(t) substituted in place of ‘x’. And now we see something magic:

f'(t) = \frac{1}{\frac{1}{c}\cdot\frac{1}{f(t)}}

f'(t) = c\cdot f(t)

And that is the wonderful thing about the exponential. Its derivative is a constant times its original value. That alone would make the exponential one of mathematics’ favorite functions. It allows us, for example, to transform differential equations into polynomials. (If you want everlasting fame, albeit among mathematicians, invent a new way to turn differential equations into polynomials.) Because we could turn, say,

f'''(t) - 3f''(t) + 3f'(t) -  f(t) = 0

into

c^3 e^{ct} - 3c^2 e^{ct} + 3c e^{ct} -  e^{ct} = 0

and then

\left(c^3 - 3c^2 + 3c - 1\right) e^{ct} = 0

by supposing that f(t) has to be e^{ct} for the correct value of c. Then all you need do is find a value of ‘c’ that makes that last equation true.
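
For the record, the sample cubic above factors neatly, and a machine check agrees; numpy is my choice of tool here, not the essay’s:

```python
import numpy as np

# c^3 - 3c^2 + 3c - 1 is (c - 1)^3, so c = 1, three times over.
print(np.roots([1, -3, 3, -1]))   # all three roots are (numerically) 1
```

So f(t) = e^{t} solves that differential equation. (A repeated root like this also signals that t e^{t} and t^2 e^{t} are solutions, but that is a story for a differential equations course.)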

Supposing that the answer has this convenient form may remind you of searching for the lost keys over here where the light is better. But we find so many keys in this good light. If you carry on in mathematics you will never stop seeing this trick, although it may be disguised.

In part because it’s so easy to work with. In part because exponentials like this cover so much of what we might like to do. Let’s go back to looking at the derivative of the exponential function.

f'(t) = c\cdot f(t)

There are many ways to understand what a derivative is. One compelling way is to think of it as the rate of change. If you make a tiny change in t, how big is the change in f(t)? So what is the rate of change here?

We can pose this as a pretend-physics problem. This lets us use our physical intuition to understand things. This also is the transition between careful reasoning and ad-hoc arguments. Imagine a particle that, at time ‘t’, is at the position x = f(t) . What is its velocity? That’s the first derivative of its position, so, x' = f'(t) = c\cdot f(t) .

If we are using our physics intuition to understand this it helps to go all the way. Where is the particle? Can we plot that? … Sure. We’re used to matching real numbers with points on a number line. Go ahead and do that. Not to give away spoilers, but we will want to think about complex numbers too. Mathematicians are used to matching complex numbers with points on the Cartesian plane, though. The real part of the complex number matches the horizontal coordinate. The imaginary part matches the vertical coordinate.

So how is this particle moving?

To say for sure we need some value of t. All right. Pick your favorite number. That’s our t. f(t) follows from whatever your t was. What’s interesting is that the change also depends on c. There’s a couple possibilities. Let me go through them.

First, what if c is zero? Well, then the definition of g(x) was gibberish and we can’t have that. All right.

What if c is a positive real number? Well, then, f'(t) is some positive multiple of whatever f(t) was. The change is “away from zero”. The particle will push away from the origin. As t increases, f(t) increases, so it pushes away faster and faster. This is exponential growth.

What if c is a negative real number? Well, then, f'(t) is some negative multiple of whatever f(t) was. The change is “towards zero”. The particle pulls toward the origin. But the closer it gets the more slowly it approaches. If t is large enough, f(t) will be so tiny that c\cdot f(t) is too small to notice. The motion declines into imperceptibility.
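
A quick table makes the two behaviors concrete. This sketch uses nothing but Python’s standard math module; c = 0.5 and c = -0.5 are arbitrary picks:

import math

for c in (0.5, -0.5):
    print(c, [round(math.exp(c * t), 4) for t in range(5)])
# c = 0.5  gives 1.0, 1.6487, 2.7183, 4.4817, 7.3891  -- growth, ever faster
# c = -0.5 gives 1.0, 0.6065, 0.3679, 0.2231, 0.1353  -- decay, ever slower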

What if c is an imaginary number, though?

So let’s suppose that c is equal to some real number b times \imath , where \imath^2 = -1 .

I need some way to describe what value f(t) has, for whatever your pick of t was. Let me say it’s equal to \alpha + \beta\imath , where \alpha and \beta are some real numbers whose value I don’t care about. What’s important here is that f(t) = \alpha + \beta\imath .

And, then, what’s the first derivative? The magnitude and direction of motion? That’s easy to calculate; it’ll be \imath b f(t) = -b\beta + b\alpha\imath . This is an interesting complex number. Do you see what’s interesting about it? I’ll get there in the next paragraph.

So f(t) matches some point on the Cartesian plane. But f'(t), the direction our particle moves with a small change in t, is a point too: plot whatever complex number f'(t) is as another point on the plane. The line segment connecting the origin to f(t) is perpendicular to the one connecting the origin to f'(t). The ‘motion’ of this particle is perpendicular to its position. And it always is. There are several ways to show this. An easy one is to just pick some values for \alpha and \beta and b and try it out. This proof is not rigorous, but it is quick and convincing.
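
Here is one such try-it-out, as a Python sketch. The values of \alpha and \beta and b are arbitrary picks; two vectors in the plane are perpendicular exactly when their dot product is zero:

alpha, beta, b = 0.8, -1.7, 2.5
f = complex(alpha, beta)     # the position, f(t) = alpha + beta * i
fprime = 1j * b * f          # the velocity, f'(t) = i b f(t)
dot = f.real * fprime.real + f.imag * fprime.imag
print(dot)                   # 0.0: position and motion are perpendicular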

If your direction of motion is always perpendicular to your position, then what you’re doing is moving in a circle around the origin. This we pick up in physics, but it applies to the pretend-particle moving here. The exponentials of \imath t and 2 \imath t and -40 \imath t will all be points on a locus that’s a circle centered on the origin. The values will look like the cosine of an angle plus \imath times the sine of an angle.
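
You can spot-check that last claim with Python’s standard cmath module. The angle t = 0.9 is an arbitrary pick:

import cmath, math

t = 0.9
print(cmath.exp(1j * t))                    # the exponential of an imaginary number
print(complex(math.cos(t), math.sin(t)))    # the same point on the unit circle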

And there, I think, we finally get some justification for the exponential of an imaginary number being a complex number. And for why exponentials might have anything to do with cosines and sines.

You might ask what if c is a complex number, if it’s equal to a + b\imath for some real numbers a and b. In this case, you get spirals as t changes. If a is positive, you get points spiralling outward as t increases. If a is negative, you get points spiralling inward toward zero as t increases. If b is positive the spirals go counterclockwise. If b is negative the spirals go clockwise. e^{(a + \imath b) t} is the same as e^{at} \cdot e^{\imath b t} .
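
Here is a small sketch of such a spiral, again with cmath; a = -0.2 and b = 3.0 are arbitrary picks, giving a counterclockwise spiral pulling inward:

import cmath

a, b = -0.2, 3.0
for t in (0.0, 0.5, 1.0, 1.5, 2.0):
    z = cmath.exp(complex(a, b) * t)
    print(t, abs(z))     # the modulus shrinks like e^{-0.2 t} as the point winds around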

This does depend on knowing the exponential of a sum of terms, such as of a + \imath b , is equal to the product of the exponentials of those terms. This is a good thing to have in your portfolio. If I remember right, it comes in around the 25th thing. It’s an easy result to have if you already showed something about the logarithms of products.
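
That rule is easy to spot-check numerically, too. A minimal sketch; z and w are arbitrary complex picks:

import cmath

z, w = complex(0.3, -1.1), complex(-0.5, 2.0)
print(cmath.exp(z + w))                 # the exponential of the sum ...
print(cmath.exp(z) * cmath.exp(w))      # ... equals the product of the exponentials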


Thank you for reading. I have this and all my A-to-Z topics for the year at this link. All my essays for this and past A-to-Z sequences are at this link. And I am still interested in topics to discuss in the coming weeks. Take care, please.

Using my A to Z Archives: Distribution (statistics)


There are some words that mathematicians use a lot, and to suggest things that are similar but not identical. “Normal” is one of them. Distribution is another, and in the End 2016 A-to-Z I discussed statistical distributions. To look at how a process affects a distribution, rather than a particular value, is one of the great breakthroughs of 19th century mathematical physics. This has implications about what it means to understand and to predict the behavior of a system.

I’m looking for G, H, and I topics for the All 2020 A-to-Z


When I look at how much I’m not getting ahead of deadline for these essays I’m amazed to think I should be getting topics for as much as five weeks out. Still, I should.

What I’d like is suggestions of things to write about: anything with a name starting with the letters ‘G’, ‘H’, or ‘I’. People with mathematical significance count, too. Please, with any nominations, let me know how to credit you for the topic. Also please mention any projects that you’re working on that could use attention. I try to credit and support people where I can.

These are the topics I’ve covered in past essays. I’m willing to revisit one if I realize I have fresh thoughts about it, too. I haven’t done so yet, but I’ll tell you, I was thinking hard about doing a rewrite on “dual”.


Topics I’ve already covered, starting with the letter ‘G’, are:


Topics I’ve already covered, starting with the letter ‘H’, are:


Topics I’ve already covered, starting with the letter ‘I’, are:


Thank you all for your thoughts, and for reading.

Using my A to Z Archives: Differential Equations


I’d like today to share a less-old essay. This one is from the 2019 A-to-Z, and it’s about one of those fundamental topics. Differential equations permeate much of mathematics. Someone might mistake them for being all of advanced mathematics, or at least the kind of mathematics that professionals do. The confusion is reasonable. So I talk a bit here about why they seem to be part of everything.

My All 2020 Mathematics A to Z: Delta


I have Dina Yagodich to thank for my inspiration this week. As will happen with these topics about something fundamental, this proved to be a hard topic to think about. I don’t know of any creative or professional projects Yagodich would like me to mention. I’ll pass them on if I learn of any.

Color cartoon illustration of a coati in a beret and neckerchief, holding up a director's megaphone and looking over the Hollywood hills. The megaphone has the symbols + x (division obelus) and = on it. The Hollywood sign is, instead, the letters MATHEMATICS. In the background are spotlights, with several of them crossing so as to make the letters A and Z; one leg of the spotlights has 'TO' in it, so the art reads out, subtly, 'Mathematics A to Z'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Delta.

In May 1962 Mercury astronaut Deke Slayton did not orbit the Earth. He had been grounded for (of course) a rare medical condition. Before his grounding he had selected his flight’s callsign and capsule name: Delta 7. His backup, Wally Schirra, who did not fly in Slayton’s place, named his capsule the Sigma 7. Schirra chose sigma for its mathematical and scientific meaning, representing the sum of (in principle) many parts. Slayton said he chose Delta only because he would have been the fourth American into space and Δ is the fourth letter of the Greek alphabet. I believe it, but do notice how D is so prominent a letter in Slayton’s name. And S, Σ, is prominent in both Slayton’s and Schirra’s names.

Δ is also a prominent mathematics and engineering symbol. It has several meanings, with several of the most useful ones escaping mathematics and becoming vaguely known things. They blur together, as ideas that are useful and related and not identical will do.

If “Δ” evokes anything mathematical to a person it is “change”. This probably owes to space in the popular imagination. Astronauts talking about the delta-vee needed to return to Earth is some of the most accessible technical talk of Apollo 13, to pick one movie. After that it’s easy to think of pumping the car’s brakes as shedding some delta-vee. It secondarily owes to school, to high school algebra classes testing people on their ability to tell how steep a line is. This gets described as the change-in-y over the change-in-x, or the delta-y over delta-x.

Δ prepended to a variable like x or y or v we read as “the change in”. It fits the astronaut and the algebra uses well. The letter Δ by itself means as much as the words “the change in” do. It describes what we’re thinking about, but waits for a noun to complete it. We say “the” rather than “a”, I’ve noticed. The change in velocity needed to reach Earth may be one thing. But “the” change in x and y coordinates to find the slope of a line? We can use infinitely many possible changes and get a good result. We must say “the” because we consider one at a time.

Used like this Δ acts like an operator. It means something like “a difference between two values of the variable ___” and lets us fill in the blank. How to pick those two values? Sometimes there’s a compelling choice. We often want to study data sampled at some schedule. The Δ then is between one sample’s value and the next. Or between the last sample value and the current one. Which is correct? Ask someone who specializes in difference equations. These are the, usually numeric, approximations to differential equations. They turn up often in signal processing or in understanding the flows of fluids or the interactions of particles. We like those because computers can solve them.
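
Here is a little sketch of both conventions acting on sampled data. The samples are an arbitrary pick of mine, the perfect squares, so the differences come out as odd numbers:

x = [0.0, 1.0, 4.0, 9.0, 16.0]                           # samples, x[n] = n squared
forward  = [x[n + 1] - x[n] for n in range(len(x) - 1)]  # next sample minus current
backward = [x[n] - x[n - 1] for n in range(1, len(x))]   # current sample minus last
print(forward)    # [1.0, 3.0, 5.0, 7.0]
print(backward)   # the same numbers, but attached to the later sample of each pair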

Δ, as this operator, can even be applied to itself. You read ΔΔ x as “the change in the change in x”. The prose is stilted, but we can understand it. It’s how the change in x has itself changed. We can imagine being interested in this Δ² x. We can see this as a numerical approximation to the second derivative of x, and this gets us back to differential equations. There are similar results for ΔΔΔ x even if we don’t wish to read it all out.
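
And here is a sketch of that numerical approximation. The function, the spacing h, and the base point are arbitrary picks; the second difference divided by h^2 lands near the second derivative at the middle sample:

import math

h, t = 0.01, 1.0
x0, x1, x2 = math.sin(t), math.sin(t + h), math.sin(t + 2 * h)
second_difference = (x2 - x1) - (x1 - x0)   # Delta Delta x
print(second_difference / h ** 2)           # approximates the second derivative
print(-math.sin(t + h))                     # the true second derivative of sine there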

In principle, Δ x can be any number. In practice, at least for an independent variable, it’s a small number, usually real. Often we’re lured into thinking of it as positive, because a phrase like “x + Δ x” looks like we’re making a number a little bigger than x. When you’re a mathematician or a quality-control tester you remember to consider “what if Δ x is negative”. From that testing you learn you wrote your computer code wrong. We’re less likely to assume this positive-ness for the dependent variable. By the time we do enough mathematics to have opinions we’ve seen too many decreasing functions to overlook that Δ y might be negative.

Notice that in that last paragraph I faithfully wrote Δ x and Δ y. Never Δ bare, unless I forgot and cannot find it in copy-editing. I’ve said that Δ means “the change in”; to write it without some variable is like writing √ by itself. We can understand wishing to talk about “the square root of”, as a concept. Still, it means something different from what √ x does.

We do write Δ by itself. Even professionals do. Written like this we don’t mean “the change in [ something ]”. We instead mean “a number”. In this role the symbol means the same thing as x or y or t might, a way to refer to a number whose value we might not know, or might not care about. The implication is that it’s small, at least if it’s something to add to the independent variable. We use it when we ponder how things would be different if there were a small change in something.

Small but not tiny. Here we step into mathematics as a language, which can be as quirky and ambiguous as English. Because sometimes we use the lower-case δ. And this also means “a small number”. It connotes a smaller number than Δ. Is 0.01 a suitable value for Δ? Or for δ? Maybe. My inclination would be to think of that as Δ, reserving δ for “a small number of value we don’t care to specify”. This may be my quirk. Others might see it differently.

We will use this lowercase δ as an operator too, thinking of things like “x + δ x”. As you’d guess, δ x connotes a small change in x. Smaller than would earn the title Δ x. There is no declaring how much smaller. It’s contextual. As with δ bare, my tendency is to think that Δ x might be a specific number but that δ x is “a perturbation”, the general idea of a small number. We can understand many interesting problems as a small change from something we already understand. That small change often earns such a δ operator.

There are smaller changes than δ x. There are infinitesimal differences. This is our attempt to make sense of “a number as close to zero as you can get without being zero”. We forego the Greek letters for this and revert to Roman letters: dx and dy and dt and the other marks of differential calculus. These are difficult numbers to discuss. It took more than a century of mathematicians’ work to find a way our experience with Δ x could inform us about dx. (We do not use ‘d’ alone to mean an even smaller change than δ. Sometimes we will in analysis write d with a space beside it, waiting for a variable to have its differential taken. I feel unsettled when I see it.)

Much of the work of that completion we can credit to Augustin Cauchy, who’s credited with about 800 publications. It’s an intimidating record, even before considering its importance. Cauchy is, per Florian Cajori’s A History of Mathematical Notations, one of the persons we can credit with the use of Δ as symbol for “the change in”. (Section 610.) He’s not the only one. Leonhard Euler and Johann Bernoulli (section 640) used Δ to represent a finite difference, the difference between two values.

I’m not aware of an explicit statement why Δ got the pick, as opposed to other letters. It’s hard to imagine a reason besides “difference starts with d”. That an etymology seems obvious does not make it so. It does seem to have a more compelling explanation than the use of “m” for the slope of a line, or \frac{\Delta y}{\Delta x} , though.

Slayton’s Mercury flight, performed instead by Scott Carpenter, did not involve any appreciable changes in orbit, a Δ v. No crewed spacecraft would until Gemini III. The Mercury flight did involve tests in orienting the spacecraft, in Δ θ and Δ φ, changes in the angles of the spacecraft’s direction. These might have been in Slayton’s mind. He eventually flew into space on the Apollo-Soyuz Test Project, when an accident during landing exposed the crew to toxic gases. The investigation discovered a lesion on Slayton’s lung. A tiny thing, ultimately benign, which, discovered earlier, could have kicked him off the mission and altered his life so.


Thank you all for reading. I’m gathering all my 2020 A-to-Z essays at this link, and have all my A-to-Z essays of any kind at this link. Here is hoping there’s a good week ahead.