A friend sent me this video after realizing I had missed an earlier mention of it and thinking it weird that I never commented on it. And I wanted to pass it on, partly because it’s neat and partly because I haven’t done enough writing about topics besides the comics recently.
Particle Life: A Game Of Life Made Of Particles is, at least in video form, a fascinating little puzzle. The Game of Life referenced is one that anybody reading a pop mathematics blog is likely to know. But here goes. The Game of Life is this iterative process. We look at a grid of points, with each point having one of a small set of possible states. Traditionally, just two. At each iteration we go through every grid location. We might change that state. Whether we do depends on some simple rules. In the original Game of Life it’s (depending on your point of view) either two or three rules. A common variation is to include “mutations”, where a location’s state changes despite what the other rules would dictate. And the fascinating thing is that these very simple rules can yield incredibly complicated and beautiful patterns. It’s a neat mathematical refutation of the idea that life is so complicated that it must take a supernatural force to generate. It turns out that many things following simple rules can produce complicated patterns. We will often call them “unpredictable”, although (unless we do have mutations) they are literally perfectly predictable. They’re just chaotic, with tiny changes in the starting conditions often resulting in huge changes in behavior quickly.
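Those rules are compact enough to sketch in a few lines of code. This is a minimal sketch of the standard two-state rules (a live cell survives with two or three live neighbors; a dead cell comes alive with exactly three), not any particular implementation:

```python
from collections import Counter

def life_step(live):
    """One step of the standard rules: birth on exactly 3 live neighbors,
    survival on 2 or 3. The grid is just a set of live (row, col) cells."""
    counts = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" flips between a horizontal and a vertical bar of three.
blinker = {(1, 0), (1, 1), (1, 2)}
print(life_step(blinker))                          # {(0, 1), (1, 1), (2, 1)}
print(life_step(life_step(blinker)) == blinker)    # True
```

Two rules of behavior (or three, if you count birth, survival, and death separately) and already you get oscillators.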
This Particle Life problem is built on similar principles. The model is different. Instead of grid locations there is a cloud of particles. The rules are a handful of laws of attraction-or-repulsion. That is, each particle exerts a force on all the other particles in the system. This is very like real physics, of clouds of asteroids or of masses of electrically charged gases or the like. But, like, a cloud of asteroids has everything following the same rule: everything attracts everything else with an intensity that depends on their distance apart. Masses of charged particles follow two rules, particles attracting or repelling each other with an intensity that depends on their distance apart.
This simulation gets more playful. There can be many kinds of particles. They can follow different and non-physically-realistic rules. Like, a red particle can be attracted to a blue, while a blue particle is repelled by a red. A green particle can be attracted to a red with twice the intensity that a red particle’s attracted to a green. Whatever; set different rules and you create different mock physics.
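As a sketch of how lopsided rules like that might look in code — the species names and force strengths here are my own invention, not taken from the video or any particular implementation:

```python
import math

# attraction[a][b] is the pull species a feels toward species b.
# Note it need not match attraction[b][a]: green chases red at twice
# the strength red drifts toward green. Negative values repel.
attraction = {
    "red":   {"red": 0.0,  "green": 1.0, "blue": 0.5},
    "green": {"red": 2.0,  "green": 0.0, "blue": -0.3},
    "blue":  {"red": -0.5, "green": 0.2, "blue": 0.1},
}

def step(particles, dt=0.01):
    """One Euler step: every particle feels every other, force fading with distance."""
    forces = []
    for i, (kind_i, x_i, y_i) in enumerate(particles):
        fx = fy = 0.0
        for j, (kind_j, x_j, y_j) in enumerate(particles):
            if i == j:
                continue
            dx, dy = x_j - x_i, y_j - y_i
            d = math.hypot(dx, dy) + 1e-9       # avoid dividing by zero
            strength = attraction[kind_i][kind_j] / d
            fx += strength * dx / d
            fy += strength * dy / d
        forces.append((fx, fy))
    return [(kind, x + fx * dt, y + fy * dt)
            for (kind, x, y), (fx, fy) in zip(particles, forces)]

# A red and a green particle: the green rushes toward the red at twice
# the rate the red drifts toward the green.
after = step([("red", 0.0, 0.0), ("green", 1.0, 0.0)])
print(after)
```

Swap in a different table of numbers and you get a different mock physics, which is the whole game.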
The result is, as the video shows, particles moving in “unpredictable” ways. Again, here, it’s “unpredictable” in the same way that I couldn’t predict when my birthday will next fall on a Tuesday. That is to say, it’s absolutely predictable; it’s just not obvious before you do the calculations. Still, it’s wonderful watching and tinkering with, if you have time to create some physics simulators. There’s source code for one in C++ that you might use. If you’re looking for little toy projects to write on your own, I suspect this would be a good little project to practice your Lua/LOVE coding, too.
My subject for today is another from Iva Sallay, longtime friend of the blog and creator of the Find the Factors recreational mathematics game. I think you’ll likely find something enjoyable at her site, whether it’s the puzzle or the neat bits of trivia as she works through all the counting numbers.
We don’t notice how often unit fractions are around us. Likely there are some in your pocket. Or there have been recently. Think of what you do when paying for a thing, when it’s not a whole number of dollars. (Pounds, euros, whatever the unit of currency is.) Suppose you have exact change. What do you give for the 38 cents?
Likely it’s something like a 25-cent piece and a 10-cent piece and three one-cent pieces. This is an American and Canadian solution. I know that 20-cent pieces are more common than 25-cent ones worldwide. It doesn’t make much difference; if you want it to be three 10-cent, one five-cent, and three one-cent pieces that’s as good. And granted, outside the United States it’s growing common to drop pennies altogether and round prices off to a five- or ten-cent value. Again, it doesn’t make much difference.
But look at the coins. The 25 cent piece is one-quarter of a dollar. It’s even called that, and stamped that on one side. I sometimes hear a dime called “a tenth of a dollar”, although mostly by carnival barkers in one-reel cartoons of the 1930s. A nickel is one-twentieth of a dollar. A penny is one-hundredth. A 20-cent piece is one-fifth of a dollar. And there are half-dollars out there, although not in the United States, not really anymore.
(Pre-decimalized currencies offered even more unit fractions. Using old British coins, for familiarity-to-me and great names, there were farthings, 1/960th of a pound; halfpennies, 1/480th; pennies, 1/240th; threepence, 1/80th of a pound; groats, 1/60th; sixpence, 1/40th; florins, 1/10th; half-crowns, 1/8th; crowns, 1/4th. And what seem to the modern wallet like impossibly tiny fractions like the half-, third-, and quarter-farthings used where 1/3840th of a pound might be a needed sum of money.)
Unit fractions get named and defined somewhere in elementary school arithmetic. They go on, becoming forgotten sometime after that. They might make a brief reappearance in calculus. There are some rational functions that get easier to integrate if you think of them as sums of fractions, with constant numerators and polynomial denominators. These aren’t unit fractions. A unit fraction has a 1, the unit, in the numerator. But we see their close cousins along the way to integrating, as an example. And see it in the promise that there are still more amazing integrals to learn how to do.
They get more attention if you take a history of computation class. Or read the subject on your own. Unit fractions stand out in history. We learn the Ancient Egyptians worked with fractions as sums of unit fractions. That is, had they dollars, they would not look at the 38/100 we do. They would look at 1/4 plus 1/10 plus 1/100 plus 1/100 plus 1/100. When we count change we are using, without noticing it, a very old computing scheme.
This isn’t quite true. The Ancient Egyptians seemed to shun repeating a unit like that. To use 1/100 once is fine; three times is suspicious. They would prefer something like 1/4 plus 1/8 plus 1/200. Or maybe some other combination. I just wrote out the first one I found.
But there are many ways we can make 38 cents using ordinary coins of the realm. There are infinitely many ways to make up any fraction using unit fractions. There’s surely a most “efficient”. Most efficient might be the one which uses the fewest number of terms. Most efficient might be the one that uses the smallest denominators. Choose what you like; no one knows a scheme that always turns up the most efficient, either way. We can always find some representation, though. It may not be “good”, but it will exist, which may be good enough. Leonardo of Pisa, or as he got named in the 19th century, Fibonacci, proved that was true.
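Fibonacci’s existence proof comes with a method, the greedy one: keep subtracting the largest unit fraction that still fits. A sketch, tried out on our 38 cents:

```python
from fractions import Fraction
from math import ceil

def egyptian(frac):
    """Fibonacci's greedy method: repeatedly peel off the largest unit
    fraction no bigger than what's left. It always terminates, though
    the denominators can balloon."""
    frac = Fraction(frac)
    denominators = []
    while frac > 0:
        d = ceil(1 / frac)            # smallest d with 1/d <= frac
        denominators.append(d)
        frac -= Fraction(1, d)
    return denominators

# Our 38 cents: 38/100 comes out as 1/3 + 1/22 + 1/825.
print(egyptian(Fraction(38, 100)))    # [3, 22, 825]
```

Three terms, which is short, but with an 825 in it — a fine illustration of how “fewest terms” and “smallest denominators” can pull in different directions.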
We may ask why the Egyptians used unit fractions. They seem inefficient compared to the way we work with fractions. Or, better, decimals. I’m not sure the question can have a coherent answer. Why do we have a fashion for converting fractions to a “proper” form? Why do we use the number of decimal points we do for a given calculation? Sometimes a particular mode of expression is the fashion. It comes to seem natural because everyone uses it. We do it too.
And there is practicality to them. Even efficiency. If you need π, for example, you can write it as 3 plus 1/8 plus 1/61 and your answer is off by under one part in a thousand. Combine this with the Egyptian method of multiplication, where you would think of (say) “11 times π” as “1 times π plus 2 times π plus 8 times π”. And with tables they had worked up which tell you what 2/8 and 2/61 would be in a normal representation. You can get rather good calculations without having to do more than addition and looking up doublings. Represent π as 3 plus 1/8 plus 1/61 plus 1/5020 and you’re correct to within one part in 130 million. That isn’t bad for having to remember four whole numbers.
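The doubling half of that scheme is easy to sketch for whole numbers. This is the general idea, often called Egyptian or peasant multiplication, rather than a reconstruction of any actual papyrus procedure:

```python
def egyptian_multiply(a, b):
    """Multiply by repeated doubling: write a as a sum of powers of two
    and add up the matching doublings of b. Nothing harder than
    addition and doubling is ever needed."""
    total = 0
    power, doubled = 1, b
    while power <= a:
        if a & power:           # this power of two appears in a
            total += doubled
        power += power          # 1, 2, 4, 8, ...
        doubled += doubled      # b, 2b, 4b, 8b, ...
    return total

# 11 times 25, done as 1*25 + 2*25 + 8*25 = 25 + 50 + 200.
print(egyptian_multiply(11, 25))    # 275
```

It is, underneath, the same binary decomposition our computers use, which is a pleasing continuity across four thousand years.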
(The Ancient Egyptians, like many of us, were not absolutely consistent in only using unit fractions. They had symbols to represent 2/3 and 3/4, probably due to these numbers coming up all the time. Human systems vary to make the commonest stuff we do easier.)
Enough practicality or efficiency, if this is that. Is there beauty? Is there wonder? Certainly. Much of it is in number theory. Number theory splits between astounding results and results that would be astounding if we had any idea how to prove them. Many of the astounding results are about unit fractions. Take, for example, the harmonic series 1 plus 1/2 plus 1/3 plus 1/4 plus 1/5 and so on. Truncate that series whenever you decide you’ve had enough. Different numbers of terms in this series will add up to different numbers. Eventually, infinitely many numbers. The numbers will grow ever-higher. There’s no number so big that it won’t, eventually, be surpassed by some long-enough truncated harmonic series. And yet, past the number 1, it’ll never touch a whole number again. Infinitely many partial sums. Partial sums differing from one another by one-googolplexth and smaller. And yet of the infinitely many whole numbers this series manages to miss them all, after its starting point. Worse, any run of consecutive terms, even one not starting from 1, will never add up to a whole number. I can understand a person who thinks mathematics is boring, but how can anyone not find it astonishing?
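It’s easy to watch the partial sums do this, if we keep them as exact fractions so no rounding can muddy the claim. A quick sketch:

```python
from fractions import Fraction

# Partial sums of the harmonic series, in exact arithmetic. They keep
# growing, yet after the first term they never land on a whole number.
total = Fraction(0)
for n in range(1, 31):
    total += Fraction(1, n)
    # A whole number would reduce to denominator 1.
    assert n == 1 or total.denominator != 1

print(float(total))    # about 3.99, and still climbing
```

Thirty terms is no proof, of course; the proof needs an argument about powers of two in the denominators. But the pattern is fun to poke at.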
There are more strange, beautiful things. Consider heptagonal numbers, which Iva Sallay knows well. These are numbers like 1 and 7 and 18 and 34 and 55 and 1288. Take a heptagonal number of, oh, beads or dots or whatever, and you can lay them out to form a regular seven-sided figure. Add together the reciprocals of the heptagonal numbers. What do you get? It’s a weird number. It’s irrational, which you maybe would have guessed as more likely than not. But it’s also transcendental. Most real numbers are transcendental. But it’s often hard to prove any specific number is.
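If you’d like to watch that constant emerge, here’s a quick sketch; the stopping point of 100,000 terms is an arbitrary choice of mine, plenty to see the sum settle down:

```python
# Heptagonal numbers: the n-th is n(5n - 3)/2.
def heptagonal(n):
    return n * (5 * n - 3) // 2

print([heptagonal(n) for n in range(1, 7)])    # [1, 7, 18, 34, 55, 81]

# Partial sum of the reciprocals; the infinite sum is the transcendental
# constant the text mentions, roughly 1.32.
total = sum(1 / heptagonal(n) for n in range(1, 100001))
print(total)
```

Knowing the decimal expansion is the easy part; proving the limit transcendental is where the real work lives.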
Unit fractions creep back into actual use. For example, in modular arithmetic, they offer a way to turn division back into multiplication. Division, in modular arithmetic, tends to be hard. Indeed, if you need an algorithm to make random-enough numbers, you often will do something with division in modular arithmetic. Suppose you want to divide by a number x, modulo y, and x and y are relatively prime, though. Then unit fractions tell us how to turn this into a greatest-common-divisor problem.
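A sketch of that in code: dividing by x, modulo y, means multiplying by “1/x mod y”, the number that multiplies with x to give 1. Python’s three-argument pow (version 3.8 and later) finds it with the extended Euclidean algorithm, the same greatest-common-divisor machinery. The function name here is my own:

```python
def divide_mod(a, x, y):
    """Return a divided by x, modulo y, assuming x and y are relatively prime."""
    inverse = pow(x, -1, y)    # the "unit fraction" 1/x in arithmetic mod y
    return (a * inverse) % y

# Dividing 5 by 3 modulo 7: since 3 * 4 = 12, which is 5 mod 7, the answer is 4.
print(divide_mod(5, 3, 7))    # 4
```

So the division never happens; it’s replaced by a multiplication, which modular arithmetic handles cheerfully.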
They teach us about our computers, too. Much of serious numerical mathematics involves matrix multiplication. Matrices are, for this purpose, tables of numbers. The Hilbert Matrix has elements that are entirely unit fractions. The Hilbert Matrix is really a family of square matrices. Pick any of the family you like. It can have two rows and two columns, or three rows and three columns, or ten rows and ten columns, or a million rows and a million columns. Your choice. The first row is made of the numbers 1, 1/2, 1/3, 1/4, and so on. The second row is made of the numbers 1/2, 1/3, 1/4, 1/5, and so on. The third row is made of the numbers 1/3, 1/4, 1/5, 1/6, and so on. You see how this is going.
Matrices can have inverses. It’s not guaranteed; matrices are like that. But the Hilbert Matrix does. It’s another matrix, of the same size. All the terms in it are integers. Multiply the Hilbert Matrix by its inverse and you get the Identity Matrix. This is a matrix, the same number of rows and columns as you started with. But nearly every element in the identity matrix is zero. The only exceptions are on the diagonal — first row, first column; second row, second column; third row, third column; and so on. There, the identity matrix has a 1. The identity matrix works, for matrix multiplication, much like the real number 1 works for normal multiplication.
Matrix multiplication is tedious. It’s not hard, but it involves a lot of multiplying and adding and it just takes forever. So set a computer to do this! And you get … uh …
For a small Hilbert Matrix and its inverse, you get an identity matrix. That’s good. For a large Hilbert Matrix and its inverse? You get garbage. And “large” maybe isn’t very large. A 12 by 12 matrix gives you trouble. A 14 by 14 matrix gives you a mess. Well, on my computer it does. Cute little laptop I got when my former computer suddenly died. On a better computer? One designed for computation? … You could do a little better. Less good than you might imagine.
The trouble is that computers don’t really do mathematics. They do an approximation of it, numerical computing. Most use a scheme called floating point arithmetic. It mostly works well. There’s a bit of error in every calculation. For most calculations, though, the error stays small. At least relatively small. The Hilbert Matrix, built of unit fractions, doesn’t respect this. It and its inverse have a “numerical instability”. Some kinds of calculations make errors explode. They’ll overwhelm the meaningful calculation. It’s a bit of a mess.
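You can watch this happen without any special software. Here’s a sketch in plain Python, inverting the Hilbert Matrix once with exact fractions and once with ordinary floating point; the size and the elimination details are my own choices:

```python
from fractions import Fraction

def hilbert(n, num=float):
    """The n-by-n Hilbert Matrix: entry (i, j) is 1/(i + j + 1)."""
    return [[num(1) / num(i + j + 1) for j in range(n)] for i in range(n)]

def invert(m):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(m)
    one, zero = m[0][0] ** 0, m[0][0] * 0    # 1 and 0 of the right type
    a = [row[:] + [one if i == j else zero for j in range(n)]
         for i, row in enumerate(m)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[pivot] = a[pivot], a[col]
        p = a[col][col]
        a[col] = [v / p for v in a[col]]
        for r in range(n):
            if r != col and a[r][col] != zero:
                f = a[r][col]
                a[r] = [v - f * w for v, w in zip(a[r], a[col])]
    return [row[n:] for row in a]

def max_identity_error(m, inv):
    """Multiply m by inv and report the worst deviation from the identity."""
    n = len(m)
    return max(
        abs(sum(m[i][k] * inv[k][j] for k in range(n)) - (1 if i == j else 0))
        for i in range(n) for j in range(n)
    )

# Exact fractions: the product really is the identity; the error is exactly 0.
print(max_identity_error(hilbert(12, Fraction), invert(hilbert(12, Fraction))))
# Floating point, same size: the errors are anything but small.
print(max_identity_error(hilbert(12, float), invert(hilbert(12, float))))
```

The exact-fraction run shows nothing is wrong with the mathematics; only the finite-precision arithmetic falls apart.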
Numerical instability is something anyone doing mathematics on the computer must learn. Must grow comfortable with. Must understand. The matrix multiplications, and inverses, that the Hilbert Matrix involves highlight those. A great and urgent example of a subtle danger of computerized mathematics waits for us in these unit fractions. And we’ve known and felt comfortable with them for thousands of years.
There’ll be some mathematical term with a name starting ‘V’ that, barring surprises, should be posted Friday. What’ll it be? I have an idea at least. It’ll be available at this link, as are the rest of these glossary posts.
The reruns of Donald Duck comics which appear at creators.com recently offered the above daily strip. It features Ludwig von Drake and one of those computers of the kind movies and TV shows and comic strips had before anybody had computers of their own, and, of course, the classic IBM motto that maybe they still have, though I never hear anyone talking about it except as something from the distant and musty past. (Unfortunately, creators.com doesn’t note the date a strip originally ran, so all I can say is the strip first ran sometime after September of 1961 and before whenever Disney stopped having original daily strips drawn; I haven’t been able to find any hint of when that was, other than not earlier than 1969, when cartoonist Al Taliaferro retired from it.)
[ Curious: one of the search engine terms which brought people here yesterday was “inner obnoxious”. I can think of when I’d used the words together, e.g., in a phrase like “your inner obnoxious twelve-year-old”, the person who makes any kind of attempt at instruction difficult. But who’s searching for that? I find also that “the gil blog by norm feuti” and “heavenly nostrils” brought me visitors so, good for everyone, I think. ]
So polynomials have a number of really nice properties. They’re easy to work with, which is a big one. We might work with difficult mathematical objects, but, rather as with people, we’ll only work with the difficult if they offer something worthwhile in trade, such as solving problems we otherwise can’t hope to tackle. Polynomials are nice and friendly, uncomplaining, and as mathematical objects go, quite un-difficult. Polynomials can be used to approximate any function, which is another big one, as long as we don’t take that “any function” too literally. We still have to think about it some. But here’s an advantage so big it’s almost invisible: to evaluate a polynomial we take some number x and raise it to a variety of powers, which we get by multiplying x by itself over and over again. We take each of those powers and multiply them by a corresponding number, a coefficient. We then add up the products of those coefficients with those powers of x. In all that time we’ve done something great.
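That recipe reads directly as code. The first function below is exactly the powers-times-coefficients description; the second, Horner’s rule, is the standard rearrangement that gets the same answer without ever computing a power separately:

```python
def evaluate_naive(coeffs, x):
    """Exactly the description above: coeffs[k] multiplies x**k, then add."""
    return sum(c * x ** k for k, c in enumerate(coeffs))

def evaluate_horner(coeffs, x):
    """The same polynomial, with the multiplications nested:
    a0 + x*(a1 + x*(a2 + ...)). No separate powers needed."""
    total = 0
    for c in reversed(coeffs):
        total = total * x + c
    return total

# 2 + 3x + x^2 at x = 4: 2 + 12 + 16 = 30, either way.
print(evaluate_naive([2, 3, 1], 4))     # 30
print(evaluate_horner([2, 3, 1], 4))    # 30
```

Either way, nothing more exotic than multiplying and adding ever happens, which is the advantage so big it’s almost invisible.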