My 2018 Mathematics A To Z: Unit Fractions


My subject for today is another from Iva Sallay, longtime friend of the blog and creator of the Find the Factors recreational mathematics game. I think you’ll likely find something enjoyable at her site, whether it’s the puzzle or the neat bits of trivia as she works through all the counting numbers.

Cartoon of a thinking coati (a raccoon-like animal from Latin America); beside him, Scrabble tiles spell out 'MATHEMATICS A TO Z' on a starry background, with various arithmetic symbols as constellations.
Art by Thomas K Dye, creator of the web comics Newshounds, Something Happens, and Infinity Refugees. His current project is Projection Edge. And you can get Projection Edge six months ahead of public publication by subscribing to his Patreon. And he’s on Twitter as @Newshoundscomic.

Unit Fractions.

We don’t often notice the unit fractions around us. But likely there are some in your pocket, or there have been recently. Think of what you do when paying for a thing, when it’s not a whole number of dollars. (Pounds, euros, whatever the unit of currency is.) Suppose you have exact change. What do you give for the 38 cents?

Likely it’s something like a 25-cent piece and a 10-cent piece and three one-cent pieces. This is an American and Canadian solution. I know that 20-cent pieces are more common than 25-cent ones worldwide. It doesn’t make much difference; if you want it to be three 10-cent, one five-cent, and three one-cent pieces that’s as good. And granted, outside the United States it’s growing common to drop pennies altogether and round prices off to a five- or ten-cent value. Again, it doesn’t make much difference.
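
(If you’d like the scheme as a couple lines of code, here’s a minimal sketch in Python, with names of my own choosing. It’s the greedy rule of always taking the biggest coin that fits, which gives the quarter-dime-three-pennies answer above.)

    # Greedy change-making: take as many of the biggest coin as fit,
    # then move on to the next denomination. Values are American, in cents.
    def make_change(cents, coins=(25, 10, 5, 1)):
        counts = {}
        for coin in coins:
            counts[coin], cents = divmod(cents, coin)
        return counts

    print(make_change(38))   # {25: 1, 10: 1, 5: 0, 1: 3}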

But look at the coins. The 25-cent piece is one-quarter of a dollar. It’s even called that, and has that stamped on one side. I sometimes hear a dime called “a tenth of a dollar”, although mostly by carnival barkers in one-reel cartoons of the 1930s. A nickel is one-twentieth of a dollar. A penny is one-hundredth. A 20-cent piece is one-fifth of a dollar. And there are half-dollars out there, although they hardly circulate in the United States anymore.

(Pre-decimalized currencies offered even more unit fractions. Using old British coins, for familiarity-to-me and great names, there were farthings, 1/960th of a pound; halfpennies, 1/480th; pennies, 1/240th; threepence, 1/80th of a pound; groats, 1/60th; sixpence, 1/40th; florins, 1/10th; half-crowns, 1/8th; crowns, 1/4th. And what seem to the modern wallet like impossibly tiny fractions like the half-, third-, and quarter-farthings used where 1/3840th of a pound might be a needed sum of money.)

Unit fractions get named and defined somewhere in elementary school arithmetic, and then get forgotten sometime after that. They might make a brief reappearance in calculus. There are some rational functions that get easier to integrate if you think of them as sums of fractions with constant numerators and polynomial denominators. Those aren’t unit fractions; a unit fraction has a 1, the unit, in the numerator. But we see unit fractions along the way to integrating, for example, \frac{1}{x^2 - x} . And see them in the promise that there are still more amazing integrals to learn how to do.
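
Spelled out, the decomposition there is

\frac{1}{x^2 - x} = \frac{1}{x(x - 1)} = \frac{1}{x - 1} - \frac{1}{x}

and each piece integrates to a logarithm, giving \ln|x - 1| - \ln|x| plus a constant.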

They get more attention if you take a history of computation class. Or read the subject on your own. Unit fractions stand out in history. We learn the Ancient Egyptians worked with fractions as sums of unit fractions. That is, had they dollars, they would not look at the \frac{38}{100} we do. They would look at \frac{1}{4} plus \frac{1}{10} plus \frac{1}{100} plus \frac{1}{100} plus \frac{1}{100} . When we count change we are using, without noticing it, a very old computing scheme.

This isn’t quite true. The Ancient Egyptians seemed to shun repeating a unit like that. To use \frac{1}{100} once is fine; three times is suspicious. They would prefer something like \frac{1}{3} plus \frac{1}{24} plus \frac{1}{200} . Or maybe some other combination. I just wrote out the first one I found.

But there are many ways we can make 38 cents using ordinary coins of the realm. There are infinitely many ways to make up any fraction using unit fractions. There’s surely a most “efficient” representation. Most efficient might be the one which uses the fewest terms. Most efficient might be the one that uses the smallest denominators. Choose what you like; no one knows a scheme that always turns up the most efficient representation, by either standard. We can always find some representation, though. It may not be “good”, but it will exist, which may be good enough. Leonardo of Pisa, or as he got named in the 19th century, Fibonacci, proved that was true. His method was the greedy one: always subtract the largest unit fraction that still fits, and what remains shrinks until it is a unit fraction itself.
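
Here’s a minimal sketch of Fibonacci’s greedy method in Python (the function name is mine), using exact rational arithmetic:

    from fractions import Fraction

    # Fibonacci's greedy method: at each step take the largest unit
    # fraction no bigger than what remains. The numerator of the
    # remainder strictly shrinks, so this always terminates.
    def egyptian(frac):
        terms = []
        while frac > 0:
            # the smallest n with 1/n <= frac is the ceiling of 1/frac
            n = -(-frac.denominator // frac.numerator)
            terms.append(Fraction(1, n))
            frac -= Fraction(1, n)
        return terms

    print(egyptian(Fraction(38, 100)))
    # [Fraction(1, 3), Fraction(1, 22), Fraction(1, 825)]

Notice it finds \frac{1}{3} plus \frac{1}{22} plus \frac{1}{825} , not the \frac{1}{3} plus \frac{1}{24} plus \frac{1}{200} from before. Existence is guaranteed; efficiency isn’t.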

We may ask why the Egyptians used unit fractions. They seem inefficient compared to the way we work with fractions. Or, better, decimals. I’m not sure the question can have a coherent answer. Why do we have a fashion for converting fractions to a “proper” form? Why do we use the number of decimal places we do for a given calculation? Sometimes a particular mode of expression is the fashion. It comes to seem natural because everyone uses it. We do it too.

And there is practicality to them. Even efficiency. If you need π, for example, you can write it as 3 plus \frac{1}{8} plus \frac{1}{61} and your answer is off by under one part in a thousand. Combine this with the Egyptian method of multiplication, where you would think of (say) “11 times π” as “1 times π plus 2 times π plus 8 times π”. And with the tables they had worked up, which tell you how to write doubled unit fractions like \frac{2}{8} and \frac{2}{61} as sums of unit fractions again. You can get rather good calculations without having to do more than addition and looking up doublings. Represent π as 3 plus \frac{1}{8} plus \frac{1}{61} plus \frac{1}{5020} and you’re correct to within one part in 130 million. That isn’t bad for having to remember four whole numbers.
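
To see how little machinery that takes, here’s a minimal sketch of the doubling scheme in Python (names mine). It builds “11 times π” out of 1, 2, and 8 copies, doubling as it goes:

    from fractions import Fraction

    PI_APPROX = 3 + Fraction(1, 8) + Fraction(1, 61)

    # Egyptian-style multiplication: 11 = 1 + 2 + 8, so add up the
    # 1-fold, 2-fold, and 8-fold doublings of the multiplicand.
    def times(k, x):
        total, running = 0, x
        while k:
            if k & 1:           # this doubling is one of the pieces of k
                total += running
            running += running  # doubling needs only addition
            k >>= 1
        return total

    print(float(times(11, PI_APPROX)))   # about 34.555; 11 times pi is about 34.558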

(The Ancient Egyptians, like many of us, were not absolutely consistent in only using unit fractions. They had symbols to represent \frac{2}{3} and \frac{3}{4} , probably due to these numbers coming up all the time. Human systems vary to make the commonest stuff we do easier.)

Enough practicality or efficiency, if this is that. Is there beauty? Is there wonder? Certainly. Much of it is in number theory. Number theory splits between astounding results and results that would be astounding if we had any idea how to prove them. Many of the astounding results are about unit fractions. Take, for example, the harmonic series 1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \cdots . Truncate that series whenever you decide you’ve had enough; different numbers of terms add up to different partial sums. The partial sums grow ever higher. There’s no number so big that it won’t, eventually, be surpassed by some long-enough truncated harmonic series. And yet, past the number 1, the partial sums never touch a whole number again. Infinitely many partial sums, eventually differing from one another by less than a googolplexth, and of the infinitely many whole numbers this series manages to miss them all after its starting point. Worse, the sum of any run of consecutive terms, not even starting from 1, is never a whole number. I can understand a person who thinks mathematics is boring, but how can anyone not find it astonishing?
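
You can watch the dodging happen with exact arithmetic. A minimal check in Python:

    from fractions import Fraction

    # Exact partial sums of the harmonic series. Past the first term,
    # the denominator never reduces to 1, so no sum is a whole number.
    partial = Fraction(0)
    for n in range(1, 1001):
        partial += Fraction(1, n)
        if n > 1 and partial.denominator == 1:
            print("whole number at term", n)   # never prints
    print(float(partial))   # about 7.49 after a thousand terms, still climbing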

There are more strange, beautiful things. Consider heptagonal numbers, which Iva Sallay knows well. These are numbers like 1 and 7 and 18 and 34 and 55 and 1288. Take a heptagonal number of, oh, beads or dots or whatever, and you can lay them out to form a regular seven-sided figure. Add together the reciprocals of the heptagonal numbers. What do you get? It’s a weird number. It’s irrational, which you maybe would have guessed as more likely than not. But it’s also transcendental. Most real numbers are transcendental. But it’s often hard to prove any specific number is.
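
(For the curious: the n-th heptagonal number is \frac{n(5n - 3)}{2} , and a rough numerical pass at the sum of reciprocals is easy enough. The transcendence, of course, is the hard part.)

    # Approximate the sum of reciprocals of the heptagonal numbers
    # n(5n - 3)/2. Truncating at a million terms is plenty here.
    total = sum(2.0 / (n * (5 * n - 3)) for n in range(1, 1_000_000))
    print(total)   # roughly 1.32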

Unit fractions creep back into actual use. For example, in modular arithmetic, they offer a way to turn division back into multiplication. Division, in modular arithmetic, tends to be hard. Indeed, if you need an algorithm to make random-enough numbers, you often will do something with division in modular arithmetic. Suppose you want to divide by a number x, modulo y, and x and y are relatively prime. Then unit fractions tell us how to turn this into a greatest-common-divisor problem: find the number that plays the role of \frac{1}{x} modulo y, and multiply by that instead.
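
Here’s a minimal sketch of that in Python (names mine). The extended Euclidean algorithm finds the number acting as \frac{1}{x} modulo y; recent versions of Python will also do this for you as pow(x, -1, y).

    # Find 1/x modulo y by the extended Euclidean algorithm.
    # Works exactly when x and y are relatively prime.
    def mod_unit_fraction(x, y):
        old_r, r = x, y
        old_s, s = 1, 0
        while r:
            q = old_r // r
            old_r, r = r, old_r - q * r
            old_s, s = s, old_s - q * s
        if old_r != 1:
            raise ValueError("x and y share a factor; no inverse exists")
        return old_s % y

    inv = mod_unit_fraction(3, 7)   # 5, since 3 * 5 = 15 = 1 (mod 7)
    print((10 * inv) % 7)           # 10 divided by 3, modulo 7: prints 1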

They teach us about our computers, too. Much of serious numerical mathematics involves matrix multiplication. Matrices are, for this purpose, tables of numbers. The Hilbert Matrix has elements that are entirely unit fractions. The Hilbert Matrix is really a family of square matrices. Pick any of the family you like. It can have two rows and two columns, or three rows and three columns, or ten rows and ten columns, or a million rows and a million columns. Your choice. The first row is made of the numbers 1, \frac{1}{2}, \frac{1}{3}, \frac{1}{4}, and so on. The second row is made of the numbers \frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \frac{1}{5}, and so on. The third row is made of the numbers \frac{1}{3}, \frac{1}{4}, \frac{1}{5}, \frac{1}{6}, and so on. You see how this is going: the element in row i, column j is \frac{1}{i + j - 1} .

Matrices can have inverses. It’s not guaranteed; matrices are like that. But the Hilbert Matrix does. Its inverse is another matrix, of the same size, and all the terms in it are integers. Multiply the Hilbert Matrix by its inverse and you get the Identity Matrix, a matrix with the same number of rows and columns as you started with. Nearly every element in the identity matrix is zero. The only exceptions are on the diagonal — first row, first column; second row, second column; third row, third column; and so on. There, the identity matrix has a 1. The identity matrix works, for matrix multiplication, much like the real number 1 works for normal multiplication.

Matrix multiplication is tedious. It’s not hard, but it involves a lot of multiplying and adding and it just takes forever. So set a computer to do this! And you get … uh …

For a small Hilbert Matrix and its inverse, you get an identity matrix. That’s good. For a large Hilbert Matrix and its inverse? You get garbage. And “large” maybe isn’t very large. A 12 by 12 matrix gives you trouble. A 14 by 14 matrix gives you a mess. Well, on my computer it does. Cute little laptop I got when my former computer suddenly died. On a better computer? One designed for computation? … You could do a little better. Less well than you might imagine, though.
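
If you’d like to watch it fall apart yourself, here’s a rough sketch assuming you have NumPy and SciPy handy; scipy.linalg supplies both the Hilbert matrix and its inverse.

    import numpy as np
    from scipy.linalg import hilbert, invhilbert

    # Multiply the Hilbert matrix by its inverse, computed from the known
    # exact formula, and see how far the floating-point product drifts
    # from the identity.
    for n in (4, 8, 12, 14):
        drift = np.max(np.abs(hilbert(n) @ invhilbert(n) - np.eye(n)))
        print(n, drift)
    # Small sizes land near machine precision; by 12 or 14 the drift
    # is huge. The exact sizes vary by machine and library.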

The trouble is that computers don’t really do mathematics. They do an approximation of it, numerical computing. Most use a scheme called floating point arithmetic. It mostly works well. There’s a bit of error in every calculation. For most calculations, though, the error stays small. At least relatively small. The Hilbert Matrix, built of unit fractions, doesn’t respect this. It and its inverse have a “numerical instability”. Some kinds of calculations make errors explode. They’ll overwhelm the meaningful calculation. It’s a bit of a mess.
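
Even the simplest sums carry a little of that error:

    print(0.1 + 0.2)         # 0.30000000000000004
    print(0.1 + 0.2 == 0.3)  # False: a tiny floating point error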

Numerical instability is something anyone doing mathematics on the computer must learn. Must grow comfortable with. Must understand. The matrix multiplications, and inverses, that the Hilbert Matrix involves highlight it. A great and urgent example of a subtle danger of computerized mathematics waits for us in these unit fractions. And we’ve known and felt comfortable with them for thousands of years.


There’ll be some mathematical term with a name starting ‘V’ that, barring surprises, should be posted Friday. What’ll it be? I have an idea at least. It’ll be available at this link, as are the rest of these glossary posts.


Where Are The Unfair Coins?


I had been reading Anand Sarwate’s essay “Randomized response, differential privacy, and the elusive biased coin”. It’s about the problem of how to get honest answers when the respondent might feel embarrassed to give an honest answer. And that’s interesting in its own right.

Along the way Sarwate mentioned the problem of finding a biased coin. In probability classes and probability problems we often call on the “fair coin” or “unbiased coin”. It’s a coin that, when tossed, comes up tails exactly half the time, and comes up heads the other half. An unfair coin, also called a biased coin, doesn’t do that. One side comes up, consistently, more often than half the time.

Both are beloved by probability instructors and textbook writers. It’s easy to get students to imagine flipping a coin, and there are only two outcomes of a coin flip. So it’s easy to write, and solve, problems that teach how to calculate the probabilities of various events. Dice are almost as popular, but the average cube die has a whopping six possible outcomes. That can be a lot to deal with.

Between my title and Sarwate’s title you likely know where this is going. Someone (Andrew Gelman and Deborah Nolan, in this case) finally got to ask the question: are there even unfair coins? And the evidence seems to be that you really can’t bias a coin. It’s possible to throw a coin so that a desired side comes up more often than chance. But the bias isn’t inherent to the coin, unless it’s a double-headed or double-tailed coin. I’d always casually assumed that biased coins were a thing, just like loaded dice were. Now I have to reconsider that, and I suppose I should doubt the loaded-dice thing too. But would dozens of charming lightly comic movies about Damon Runyonesque gamblers lie to me?

Keep The Change


I’m sorry to have fallen quiet for so long; the week has been a busy one and I haven’t been able to write as much as I want. I did want to point everyone to Geoffrey Brent’s elegant solution of my puzzle about loose change, and whether one could have different counts of each type of coin without changing the total number or the total value of those coins. It’s a wonderful proof, one I can’t see a way to improve on, and it includes an argument for the smallest number of coins that allow this ambiguity. I want to give it some attention.

The proof that there is some ambiguous change amount is a neat sort known as an existence proof, which you likely made it through mathematics class without seeing. In an existence proof one doesn’t particularly care to find a solution to the problem, but instead tries to show that a solution exists. In mathematics classes for people who aren’t becoming majors, the existence of a solution is nearly guaranteed, except when a problem is poorly proofread (I recall accidentally forcing an introduction-to-multivariable-calculus class to step into elliptic integrals, one of the most viciously difficult fields you can step into without a graduate-school background), or when the instructor wants to see whether people are just plugging numbers into formulas without understanding them. (I mean the formulas, although the numbers can be a bit iffy too.) (Spoiler alert: they have no idea what the formulas are for, but using them seems to make the instructor happy.)


Lose the Change


My Dearly Beloved, a professional philosopher, once explained to me a fine point in the theory of just what it means to know something. I wouldn’t presume to try explaining that point (though I think I have it), but a core part of it is the thought experiment of remembering having put some change — we used a dime and a nickel — in your pocket, and finding later that you did have that same amount of money although not necessarily the same change — say, that you had three nickels instead.

That spun off a cute little side question that I’ll give to any needy recreational mathematician. It’s easy to imagine this problem where you remember having 15 cents in your pocket, and you do indeed have them, but you have a different number of coins from what you remember: three nickels instead of a dime and a nickel. Or you could remember having two coins, and indeed have two, but you have a different amount from what you remember: two dimes instead of a dime and a nickel.

Is it possible to remember correctly both the total number of coins you have, and the total value of those coins, while being mistaken about the number of each type? That is, could you remember rightly you have six coins and how much they add up to, but have the count of pennies, nickels, dimes, and quarters wrong? (In the United States there are also 50-cent and dollar coins minted, but they’re novelties and can be pretty much ignored. It’s all 1, 5, 10, and 25-cent pieces.) And can you prove it?
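
(If you’d rather have a computer spoil the puzzle, a brute-force search is quick. A minimal sketch in Python, nothing like an elegant proof:)

    from itertools import product

    # Search small mixes of (pennies, nickels, dimes, quarters) for two
    # different mixes with the same coin count and the same total value.
    VALUES = (1, 5, 10, 25)

    seen = {}
    for mix in product(range(7), repeat=4):
        key = (sum(mix), sum(c * v for c, v in zip(mix, VALUES)))
        if key in seen:
            print(key[0], "coins,", key[1], "cents:", seen[key], "and", mix)
            break
        seen[key] = mix
    # First hit: 4 coins, 40 cents. Four dimes, or three nickels and a quarter.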

Fibonacci, a Comic Strip, and Venice


The comic strip Frazz, by Jef Mallett, touches on another bit of mathematics humor. I imagine if I were better-organized I’d gather all the math comic strips I see over a whole week and report on them all at once, but I’m still learning the rules of this blog, other than that anyone writing about mathematics has to bring up Fibonacci whether they want to or not.

The association that sequins bring up for me now, though, and has ever since a book I read about the United States’s war on the Barbary Coast pirates, is that the main coin of Venice for over 500 years of its existence as an independent republic was the sequin, giving me notions of financial transactions being all sparkly and prone to blowing away in a stiff breeze. It wasn’t that kind of sequin, of course, or even any sort of particularly small coin. The Venetian sequin was a rather average-looking gold coin, weighing at least nominally three and a half grams, and the name was a mutation of “zecchino”, after the name of Venice’s mint. But, apparently, the practice of sewing coins like this into women’s clothing or accessories led to the attaching of small, shiny objects to clothing or accessories, and so gave us sequins after all.

A listing on a coin collectors’ site tells me the Venetian sequin was about two centimeters in diameter, which isn’t ridiculously tiny at least. I’m not sure that’s a reliable guide to the size, although since the site is trying to sell me rare coins, it’s probably not too far off. Unfortunately most of the top couple pages of Google hits on “Venetian sequin coin size” bring up copies of Wikipedia’s report, which fails to mention physical size. An Ottoman sequin at the British Museum’s web site lists its diameter as 2.4 centimeters, but its weight at four and a third grams.

Illicitly Counted Coins


The past month I’ve had the joy of teaching a real, proper class again, after a hiatus of a few years. The hiatus has given me the chance to notice some things that I did just because that was the way I had always done them, and made it easier to spot things that I could do differently.

To get a collection of data about which we could calculate statistics, I had everyone in the class flip a coin twenty times. Besides giving everyone something to do other than figure out which of my strange mutterings should be written down in case they turn out to be on the test, the result would give me a bunch of numbers, centered around ten, once they reported the number of heads that turned up. Counting the number of heads out of a set of coin flips is one of the traditional exercises for generating probability-and-statistics numbers.
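
(If you haven’t got a class handy, a quick simulation stands in for the coin-flippers. A hypothetical version in Python:)

    import random

    # Simulate 30 students each flipping a fair coin 20 times and
    # reporting how many heads they saw.
    counts = [sum(random.random() < 0.5 for _ in range(20)) for _ in range(30)]
    print(counts)                      # a pile of numbers clustered near 10
    print(sum(counts) / len(counts))   # the sample mean, close to 10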

Good examples are some of the most precious and needed things for teaching mathematics. It’s never enough to learn a formula; one needs to learn how to look at a problem, think of what one wants to know as a result of its posing, identify what one needs to get those results, and pick out which bits of information in the problem and which formulas allow the result to be found. It’s all the better if an example resembles something normal people would find to raise a plausible question. Here, we may not be all that interested in how many times a coin comes up heads or tails, but we can imagine being interested in how often something happens given a number of chances for it to happen, and how much that count of happenings can vary if we watch several different runs.
