## My 2018 Mathematics A To Z: Unit Fractions

My subject for today is another from Iva Sallay, longtime friend of the blog and creator of the Find the Factors recreational mathematics game. I think you’ll likely find something enjoyable at her site, whether it’s the puzzle or the neat bits of trivia as she works through all the counting numbers.

# Unit Fractions.

We don’t often notice the unit fractions all around us. Likely there are some in your pocket. Or there have been recently. Think of what you do when paying for a thing, when the price isn’t a whole number of dollars. (Pounds, euros, whatever the unit of currency is.) Suppose you have exact change. What do you give for the 38 cents?

Likely it’s something like a 25-cent piece and a 10-cent piece and three one-cent pieces. This is an American and Canadian solution. I know that 20-cent pieces are more common than 25-cent ones worldwide. It doesn’t make much difference; if you want it to be three 10-cent, one five-cent, and three one-cent pieces that’s as good. And granted, outside the United States it’s growing common to drop pennies altogether and round prices off to a five- or ten-cent value. Again, it doesn’t make much difference.

But look at the coins. The 25 cent piece is one-quarter of a dollar. It’s even called that, and stamped that on one side. I sometimes hear a dime called “a tenth of a dollar”, although mostly by carnival barkers in one-reel cartoons of the 1930s. A nickel is one-twentieth of a dollar. A penny is one-hundredth. A 20-cent piece is one-fifth of a dollar. And there are half-dollars out there, although not in the United States, not really anymore.

(Pre-decimalized currencies offered even more unit fractions. Using old British coins, for familiarity-to-me and great names, there were farthings, 1/960th of a pound; halfpennies, 1/480th; pennies, 1/240th; threepence, 1/80th of a pound; groats, 1/60th; sixpence, 1/40th; florins, 1/10th; half-crowns, 1/8th; crowns, 1/4th. And what seem to the modern wallet like impossibly tiny fractions like the half-, third-, and quarter-farthings used where 1/3840th of a pound might be a needed sum of money.)

Unit fractions get named and defined somewhere in elementary school arithmetic. Then they go on to be forgotten sometime after that. They might make a brief reappearance in calculus. There are some rational functions that get easier to integrate if you think of them as sums of fractions, with constant numerators and polynomial denominators. These aren’t unit fractions; a unit fraction has a 1, the unit, in the numerator and a whole number underneath. But we see those unit numerators along the way to integrating, for example, $\frac{1}{x^2 - x}$. And see in them the promise that there are still more amazing integrals to learn how to do.

They get more attention if you take a history of computation class. Or read the subject on your own. Unit fractions stand out in history. We learn the Ancient Egyptians worked with fractions as sums of unit fractions. That is, had they dollars, they would not look at the $\frac{38}{100}$ we do. They would look at $\frac{1}{4}$ plus $\frac{1}{10}$ plus $\frac{1}{100}$ plus $\frac{1}{100}$ plus $\frac{1}{100}$. When we count change we are using, without noticing it, a very old computing scheme.

This isn’t quite true. The Ancient Egyptians seemed to shun repeating a unit like that. To use $\frac{1}{100}$ once is fine; three times is suspicious. They would prefer something like $\frac{1}{3}$ plus $\frac{1}{24}$ plus $\frac{1}{200}$. Or maybe some other combination. I just wrote out the first one I found.
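Both decompositions are easy to check with exact rational arithmetic; a quick sketch in Python:

```python
from fractions import Fraction as F

# the coin decomposition: 1/4 + 1/10 + 1/100 + 1/100 + 1/100
coins = F(1, 4) + F(1, 10) + 3 * F(1, 100)

# an Egyptian-style decomposition with no repeated unit fraction
egyptian = F(1, 3) + F(1, 24) + F(1, 200)

print(coins == F(38, 100), egyptian == F(38, 100))  # True True
```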

But there are many ways we can make 38 cents using ordinary coins of the realm. There are infinitely many ways to make up any fraction using unit fractions. There’s surely a most “efficient”. Most efficient might be the one which uses the fewest number of terms. Most efficient might be the one that uses the smallest denominators. Choose what you like; no one knows a scheme that always turns up the most efficient, either way. We can always find some representation, though. It may not be “good”, but it will exist, which may be good enough. Leonardo of Pisa, or as he got named in the 19th century, Fibonacci, proved that was true.
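Fibonacci’s proof is constructive: the “greedy” method, which at each step grabs the largest unit fraction not exceeding what’s left, always finishes in finitely many steps. A sketch of it, using exact arithmetic:

```python
from fractions import Fraction
from math import ceil

def greedy_egyptian(frac):
    """Decompose a fraction in (0, 1) into distinct unit fractions
    by Fibonacci's greedy method; returns the list of denominators."""
    denominators = []
    while frac > 0:
        # the smallest n with 1/n <= frac is n = ceil(1/frac)
        n = ceil(1 / frac)
        denominators.append(n)
        frac -= Fraction(1, n)
    return denominators

print(greedy_egyptian(Fraction(38, 100)))  # [3, 22, 825]
```

Notice the greedy answer for 38 cents is not the coin-change answer, and its denominators balloon quickly; “always works” and “most efficient” are different promises.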

We may ask why the Egyptians used unit fractions. They seem inefficient compared to the way we work with fractions. Or, better, decimals. I’m not sure the question can have a coherent answer. Why do we have a fashion for converting fractions to a “proper” form? Why do we use the number of decimal points we do for a given calculation? Sometimes a particular mode of expression is the fashion. It comes to seem natural because everyone uses it. We do it too.

And there is practicality to them. Even efficiency. If you need π, for example, you can write it as 3 plus $\frac{1}{8}$ plus $\frac{1}{61}$ and your answer is off by under one part in a thousand. Combine this with the Egyptian method of multiplication, where you would think of (say) “11 times π” as “1 times π plus 2 times π plus 8 times π”. And with tables they had worked up which tell you what $\frac{2}{8}$ and $\frac{2}{61}$ would be in a normal representation. You can get rather good calculations without having to do more than addition and looking up doublings. Represent π as 3 plus $\frac{1}{8}$ plus $\frac{1}{61}$ plus $\frac{1}{5020}$ and you’re correct to within one part in 130 million. That isn’t bad for having to remember four whole numbers.
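Those error claims are quick to check directly:

```python
import math

# pi as a whole number plus unit fractions
approx3 = 3 + 1/8 + 1/61
approx4 = approx3 + 1/5020

print(abs(approx3 - math.pi) / math.pi)  # about 6e-5: under 1 in 1,000
print(abs(approx4 - math.pi) / math.pi)  # a few parts per billion
```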

(The Ancient Egyptians, like many of us, were not absolutely consistent in only using unit fractions. They had symbols to represent $\frac{2}{3}$ and $\frac{3}{4}$, probably due to these numbers coming up all the time. Human systems vary to make the commonest stuff we do easier.)

Enough practicality or efficiency, if this is that. Is there beauty? Is there wonder? Certainly. Much of it is in number theory. Number theory splits between astounding results and results that would be astounding if we had any idea how to prove them. Many of the astounding results are about unit fractions. Take, for example, the harmonic series $1 + \frac{1}{2} + \frac{1}{3} + \frac{1}{4} + \frac{1}{5} + \frac{1}{6} + \cdots$. Truncate that series whenever you decide you’ve had enough. Different numbers of terms add up to different partial sums; infinitely many of them, eventually. The sums grow ever-higher. There’s no number so big that it won’t, eventually, be surpassed by some long-enough truncated harmonic series. And yet, past the number 1, the series never touches a whole number again. Infinitely many partial sums. Partial sums differing from one another by a googolplexth and less. And yet of the infinitely many whole numbers this series manages to miss them all, after its starting point. Worse, the sum of any run of consecutive terms, even one not starting from 1, will never hit a whole number. I can understand a person who thinks mathematics is boring, but how can anyone not find it astonishing?
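Exact arithmetic lets us watch both behaviors at once: the partial sums creep upward, never quite landing on a whole number. A sketch:

```python
from fractions import Fraction

partial = Fraction(0)
never_whole = True
for n in range(1, 1001):
    partial += Fraction(1, n)
    # a Fraction with denominator 1 would be a whole number
    if n > 1 and partial.denominator == 1:
        never_whole = False

print(float(partial))  # about 7.49: the growth is real but very slow
print(never_whole)     # True: no partial sum past 1 is a whole number
```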

There are more strange, beautiful things. Consider heptagonal numbers, which Iva Sallay knows well. These are numbers like 1 and 7 and 18 and 34 and 55 and 1288. Take a heptagonal number of, oh, beads or dots or whatever, and you can lay them out to form a regular seven-sided figure. Add together the reciprocals of the heptagonal numbers. What do you get? It’s a weird number. It’s irrational, which you maybe would have guessed as more likely than not. But it’s also transcendental. Most real numbers are transcendental. But it’s often hard to prove any specific number is.
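The n-th heptagonal number is $n(5n - 3)/2$, so the sum of reciprocals is easy to estimate numerically; a sketch (the closed-form value, the one proved transcendental, I won’t try to reproduce here):

```python
def heptagonal(n):
    # the n-th heptagonal number, n(5n - 3)/2
    return n * (5 * n - 3) // 2

# the numbers named above; 1288 turns out to be the 23rd
print([heptagonal(n) for n in (1, 2, 3, 4, 5, 23)])  # [1, 7, 18, 34, 55, 1288]

# partial sum of reciprocals; the tail past N terms is under 2/(5N)
N = 1_000_000
total = sum(1 / heptagonal(n) for n in range(1, N + 1))
print(total)  # roughly 1.32
```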

Unit fractions creep back into actual use. For example, in modular arithmetic, they offer a way to turn division back into multiplication. Division, in modular arithmetic, tends to be hard. Indeed, if you need an algorithm to make random-enough numbers, you often will do something with division in modular arithmetic. Suppose you want to divide by a number x, modulo y, and x and y are relatively prime, though. Then unit fractions tell us how to turn this into a greatest-common-divisor problem.
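The extended Euclidean algorithm, the same machinery that finds greatest common divisors, produces the “unit fraction” 1/x modulo y: a number u with u·x ≡ 1 (mod y). Dividing by x then becomes multiplying by u. A sketch:

```python
def mod_inverse(x, y):
    """The 'unit fraction' 1/x modulo y, for gcd(x, y) == 1,
    found with the extended Euclidean algorithm."""
    old_r, r = x, y
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    if old_r != 1:
        raise ValueError("x and y are not relatively prime")
    return old_s % y

# dividing 15 by 7 modulo 26: multiply by the inverse of 7 instead
u = mod_inverse(7, 26)
print(u)              # 15, since 7 * 15 = 105 = 4 * 26 + 1
print((15 * u) % 26)  # 17, and indeed 17 * 7 = 119 leaves 15 mod 26
```

Python 3.8 and later build this in as `pow(x, -1, y)`.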

They teach us about our computers, too. Much of serious numerical mathematics involves matrix multiplication. Matrices are, for this purpose, tables of numbers. The Hilbert Matrix has elements that are entirely unit fractions. The Hilbert Matrix is really a family of square matrices. Pick any of the family you like. It can have two rows and two columns, or three rows and three columns, or ten rows and ten columns, or a million rows and a million columns. Your choice. The first row is made of the numbers $1, \frac{1}{2}, \frac{1}{3}, \frac{1}{4},$ and so on. The second row is made of the numbers $\frac{1}{2}, \frac{1}{3}, \frac{1}{4}, \frac{1}{5},$ and so on. The third row is made of the numbers $\frac{1}{3}, \frac{1}{4}, \frac{1}{5}, \frac{1}{6},$ and so on. You see how this is going.

Matrices can have inverses. It’s not guaranteed; matrices are like that. But the Hilbert Matrix does. It’s another matrix, of the same size. All the terms in it are integers. Multiply the Hilbert Matrix by its inverse and you get the Identity Matrix. This is a matrix, the same number of rows and columns as you started with. But nearly every element in the identity matrix is zero. The only exceptions are on the diagonal — first row, first column; second row, second column; third row, third column; and so on. There, the identity matrix has a 1. The identity matrix works, for matrix multiplication, much like the real number 1 works for normal multiplication.

Matrix multiplication is tedious. It’s not hard, but it involves a lot of multiplying and adding and it just takes forever. So set a computer to do this! And you get … uh …

For a small Hilbert Matrix and its inverse, you get an identity matrix. That’s good. For a large Hilbert Matrix and its inverse? You get garbage. And “large” maybe isn’t very large. A 12 by 12 matrix gives you trouble. A 14 by 14 matrix gives you a mess. Well, on my computer it does. Cute little laptop I got when my former computer suddenly died. On a better computer? One designed for computation? … You could do a little better. Less good than you might imagine.
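You can run the experiment yourself. A sketch in Python, using the known closed-form formula for the inverse’s integer entries (so the only floating point is in the Hilbert matrix itself and in the final multiplication):

```python
from math import comb

def hilbert(n):
    # the n-by-n Hilbert matrix, in floating point
    return [[1.0 / (i + j + 1) for j in range(n)] for i in range(n)]

def hilbert_inverse(n):
    # the exact integer inverse, from the known closed-form formula
    return [[(-1) ** (i + j) * (i + j + 1)
             * comb(n + i, n - j - 1) * comb(n + j, n - i - 1)
             * comb(i + j, i) ** 2
             for j in range(n)] for i in range(n)]

def max_identity_error(n):
    # multiply in floating point; worst deviation from the identity
    h, inv = hilbert(n), hilbert_inverse(n)
    return max(abs(sum(h[i][k] * inv[k][j] for k in range(n))
                   - (1 if i == j else 0))
               for i in range(n) for j in range(n))

print(max_identity_error(3))   # tiny: essentially the identity
print(max_identity_error(12))  # not tiny at all: the errors explode
```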

The trouble is that computers don’t really do mathematics. They do an approximation of it, numerical computing. Most use a scheme called floating point arithmetic. It mostly works well. There’s a bit of error in every calculation. For most calculations, though, the error stays small. At least relatively small. The Hilbert Matrix, built of unit fractions, doesn’t respect this. It and its inverse have a “numerical instability”. Some kinds of calculations make errors explode. They’ll overwhelm the meaningful calculation. It’s a bit of a mess.
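The flavor of that per-calculation error is easy to see even without matrices; a minimal sketch:

```python
print(0.1 + 0.2)         # 0.30000000000000004
print(0.1 + 0.2 == 0.3)  # False
# every decimal value is stored as the nearest binary fraction,
# so a tiny error creeps into most values and most operations
print(sum(0.1 for _ in range(10)))  # 0.9999999999999999
```

For most work these errors stay down at the sixteenth decimal place. Numerically unstable calculations, like the Hilbert Matrix ones, amplify them until they swamp the answer.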

Numerical instability is something anyone doing mathematics on the computer must learn. Must grow comfortable with. Must understand. The matrix multiplications, and inversions, that the Hilbert Matrix involves highlight those dangers. A great and urgent example of a subtle danger of computerized mathematics waits for us in these unit fractions. And we’ve known and felt comfortable with them for thousands of years.

There’ll be some mathematical term with a name starting ‘V’ that, barring surprises, should be posted Friday. What’ll it be? I have an idea at least. It’ll be available at this link, as are the rest of these glossary posts.

## Reading the Comics, August 15, 2017: Cake Edition

It was again a week just busy enough that I’m comfortable splitting the Reading The Comics thread into two pieces. It’s also a week that made me think about cake. So, I’m happy with the way last week shaped up, as far as comic strips go. Other stuff could have used a lot of work. Let’s read.

Stephen Bentley’s Herb and Jamaal rerun for the 13th depicts “teaching the kids math” by having them divide up a cake fairly. I accept this as a viable way to make kids interested in the problem. Cake-slicing problems are a corner of game theory as it addresses questions we always find interesting. How can a resource be fairly divided? How can it be divided if there is not a trusted authority? How can it be divided if the parties do not trust one another? Why do we not have more cake? The kids seem to be trying to divide the cake by volume, which could be fair. If the cake slice is a small enough wedge they can likely get near enough a perfect split by ordinary measures. If it’s a bigger wedge they’d need calculus to get the answer perfect. It’ll be well-approximated by solids of revolution. But they likely don’t need perfection.

This is assuming the value of the icing side is not held in greater esteem than the bare-cake sides. This is not how I would value the parts of the cake. They’ll need to work something out about that, too.

Mac King and Bill King’s Magic in a Minute for the 13th features a bit of numerical wizardry. That the dates in a three-by-three block in a calendar will add up to nine times the centered date. Why this works is good for a bit of practice in simplifying algebraic expressions. The stunt will be more impressive if you can multiply by nine in your head. I’d do that by taking ten times the given date and then subtracting the original date. I won’t say I’m fond of the idea of subtracting 23 from 230, or 17 from 170. But a skilled performer could do something interesting while trying to do this subtraction. (And if you practice the trick you can get the hang of the … fifteen? … different possible answers.)
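The algebra behind the stunt is quick to verify: the nine dates are the center date plus the offsets −8, −7, −6, −1, 0, 1, 6, 7, 8, and the offsets cancel. A sketch (ignoring which weekday the month starts on):

```python
# a 3-by-3 calendar block around a center date d holds these offsets
OFFSETS = [-8, -7, -6, -1, 0, 1, 6, 7, 8]

# the offsets cancel, so the block always sums to nine times the center
for d in range(9, 24):  # the fifteen dates that can sit at the center
    assert sum(d + k for k in OFFSETS) == 9 * d

# the mental-arithmetic shortcut: ten times the date, minus the date
d = 17
print(10 * d - d)  # 153
```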

Bill Amend’s FoxTrot rerun for the 14th mentions mathematics. Young nerd Jason’s trying to get back into hand-raising form. Arithmetic has considerable advantages as a thing to practice answering teachers. The questions have clear, definitely right answers, that can be worked out or memorized ahead of time, and can be asked in under half a panel’s word balloon space. I deduce the strip first ran the 21st of August, 2006, although that image seems to be broken.

Ed Allison’s Unstrange Phenomena for the 14th suggests changes in the definition of the mile and the gallon to effortlessly improve the fuel economy of cars. As befits Allison’s Dadaist inclinations the numbers don’t work out. As it is, if you defined a New Mile of 7,290 feet (and didn’t change what a foot was) and a New Gallon of 192 fluid ounces (and didn’t change what an old fluid ounce was) then a 20 old-miles-per-old-gallon car would come out to about 21.7 new-miles-per-new-gallon. Commenter Del_Grande points out that if the New Mile were 3,960 feet then the calculation would work out. This inspires in me curiosity. Did Allison figure out the numbers that would work and then make a mistake in the final art? Or did he pick funny-looking numbers and not worry about whether they made sense? No way to tell from here, I suppose. (Allison doesn’t mention ways to get in touch on the comic’s About page and I’ve only got the weakest links into the professional cartoon community.)
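The arithmetic checks out; a sketch with a hypothetical helper, taking 5,280 feet to the (old) mile and 128 fluid ounces to the (old) gallon:

```python
OLD_MILE_FT = 5280     # feet in a current mile
OLD_GALLON_FLOZ = 128  # fluid ounces in a current US gallon

def new_mpg(old_mpg, new_mile_ft, new_gallon_floz):
    # re-express a fuel economy figure in the redefined units
    feet_per_old_gallon = old_mpg * OLD_MILE_FT
    new_miles = feet_per_old_gallon / new_mile_ft
    new_gallons_per_old = OLD_GALLON_FLOZ / new_gallon_floz
    return new_miles / new_gallons_per_old

print(round(new_mpg(20, 7290, 192), 1))  # 21.7, with the strip's numbers
print(round(new_mpg(20, 3960, 192), 1))  # 40.0, with Del_Grande's mile
```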

Patrick Roberts’s Todd the Dinosaur for the 15th mentions long division as the stuff of nightmares. So it is. I guess MathWorld and Wikipedia endorse calling 128 divided by 4 long division, although I’m not sure I’m comfortable with that. This may be idiosyncratic; I’d thought of long division as where the divisor is two or more digits. A three-digit number divided by a one-digit one doesn’t seem long to me. I’d just think that was division. I’m curious what readers’ experiences have been.

## The Summer 2017 Mathematics A To Z: Gaussian Primes

Once more do I have Gaurish to thank for the day’s topic. (There’ll be two more chances this week, providing I keep my writing just enough ahead of deadline.) This one doesn’t touch category theory or topology.

# Gaussian Primes.

I keep touching on group theory here. It’s a field that’s about what kinds of things can work like arithmetic does. A group is a set of things that you can add together. At least, you can do something that works like adding regular numbers together does. A ring is a set of things that you can add and multiply together.

There are many interesting rings. Here’s one. It’s called the Gaussian Integers. They’re made of numbers we can write as $a + b\imath$, where ‘a’ and ‘b’ are some integers. $\imath$ is what you figure, that number that multiplied by itself is -1. These aren’t the complex-valued numbers, you notice, because ‘a’ and ‘b’ are always integers. But you add them together the way you add complex-valued numbers together. That is, $a + b\imath$ plus $c + d\imath$ is the number $(a + c) + (b + d)\imath$. And you multiply them the way you multiply complex-valued numbers together. That is, $a + b\imath$ times $c + d\imath$ is the number $(a\cdot c - b\cdot d) + (a\cdot d + b\cdot c)\imath$.
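Representing a Gaussian integer as a pair (a, b) of ordinary integers, the two rules come out to a few lines:

```python
def gauss_add(p, q):
    # (a + bi) + (c + di) = (a + c) + (b + d)i
    (a, b), (c, d) = p, q
    return (a + c, b + d)

def gauss_mul(p, q):
    # (a + bi)(c + di) = (ac - bd) + (ad + bc)i
    (a, b), (c, d) = p, q
    return (a * c - b * d, a * d + b * c)

print(gauss_add((3, 2), (1, -5)))  # (4, -3)
print(gauss_mul((1, 1), (1, -1)))  # (2, 0): an ordinary integer again
```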

We created something that has addition and multiplication. It picks up subtraction for free. It doesn’t have division. We can create rings that do, but this one won’t, any more than regular old integers have division. But we can ask what other normal-arithmetic-like stuff these Gaussian integers do have. For instance, can we factor numbers?

This isn’t an obvious one. No, we can’t expect to be able to divide one Gaussian integer by another. But we can’t expect to divide a regular old integer by another, not and get an integer out of it. That doesn’t mean we can’t factor them. It means we divide the regular old integers into a couple classes. There’s prime numbers. There’s composites. There’s the unit, the number 1. There’s zero. We know prime numbers; they’re 2, 3, 5, 7, and so on. Composite numbers are the ones you get by multiplying prime numbers together: 4, 6, 8, 9, 10, and so on. 1 and 0 are off on their own. Leave them there. We can’t divide any old integer by any old integer. But we can say an integer is equal to this string of prime numbers multiplied together. This gives us a handle by which we can prove a lot of interesting results.

We can do the same with Gaussian integers. We can divide them up into Gaussian primes, Gaussian composites, units, and zero. The words mean what they mean for regular old integers. A Gaussian composite can be factored into a product of Gaussian primes. Gaussian primes can’t be factored any further.

If we know what the prime numbers are for regular old integers we can tell whether something’s a Gaussian prime. Admittedly, knowing all the prime numbers is a challenge. But a Gaussian integer $a + b\imath$ will be prime whenever either of a couple simple-to-test conditions is true. The first is when ‘a’ and ‘b’ are both nonzero and $a^2 + b^2$ is a prime number. So, for example, $5 + 4\imath$ is a Gaussian prime.

You might ask, hey, would $-5 - 4\imath$ also be a Gaussian prime? That’s also got components that are integers, and the squares of them add up to a prime number (41). Well-spotted. Gaussian primes appear in quartets. If $a + b\imath$ is a Gaussian prime, so is $-a -b\imath$. And so are $-b + a\imath$ and $b - a\imath$.

There’s another group of Gaussian primes. These are the numbers $a + b\imath$ where either ‘a’ or ‘b’ is zero. Then the other one is, if positive, three more than a whole multiple of four. If it’s negative, then it’s three less than a whole multiple of four. So ‘3’ is a Gaussian prime, as is -3, and as is $3\imath$ and so is $-3\imath$.
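The two tests translate into a short routine; a sketch, with a plain trial-division primality check doing the heavy lifting:

```python
def is_prime(n):
    # ordinary trial-division test, fine for small numbers
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def is_gaussian_prime(a, b):
    """Test whether a + bi is a Gaussian prime."""
    if a != 0 and b != 0:
        return is_prime(a * a + b * b)
    n = abs(a + b)  # whichever of a, b is the nonzero one
    return is_prime(n) and n % 4 == 3

print(is_gaussian_prime(5, 4))  # True: 25 + 16 = 41 is prime
print(is_gaussian_prime(0, 3))  # True: 3 is 3 more than a multiple of 4
print(is_gaussian_prime(5, 0))  # False: 5 is 1 more than a multiple of 4
```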

This has strange effects. Like, ‘3’ is a prime number in the regular old scheme of things. It’s also a Gaussian prime. But familiar other prime numbers like ‘2’ and ‘5’? Not anymore. Two is equal to $(1 + \imath) \cdot (1 - \imath)$; both of those terms are Gaussian primes. Five is equal to $(2 + \imath) \cdot (2 - \imath)$. There are similar shocking results for 13. But, roughly, the world of composites and prime numbers translates into Gaussian composites and Gaussian primes. In this slightly exotic structure we have everything familiar about factoring numbers.

You might have some nagging thoughts. Like, sure, two is equal to $(1 + \imath) \cdot (1 - \imath)$. But isn’t it also equal to $(1 + \imath) \cdot (1 - \imath) \cdot \imath \cdot (-\imath)$? One of the important things about prime numbers is that every composite number is the product of a unique string of prime numbers. Do we have to give that up for Gaussian integers?

Good nag. But no; the doubt comes about because you’ve forgotten the difference between “the positive integers” and “all the integers”. If we stick to positive whole numbers then, yeah, (say) ten is equal to two times five and no other combination of prime numbers. But suppose we have all the integers, positive and negative. Then ten is equal to either two times five or to negative two times negative five. Or, better, it’s equal to negative one times two times negative one times five. Or any of those times any even number of negative ones.

Remember that bit about separating ‘one’ out from the world of primes and composites? That’s because the number one screws up these unique factorizations. You can always toss in extra factors of one, to taste, without changing the product of something. If we have positive and negative integers to use, then negative one does almost the same trick. We can toss in any even number of extra negative ones without changing the product. This is why we separate “units” out of the numbers. They’re not part of the prime factorization of any numbers.

For the Gaussian integers there are four units. 1 and -1, $\imath$ and $-\imath$. They are neither primes nor composites, and we don’t worry about how they would otherwise multiply the number of factorizations we get.

But let me close with a neat, easy-to-understand puzzle. It’s called the moat-crossing problem. In the regular old integers it’s this: imagine that the prime numbers are islands in a dangerous sea. You start on the number ‘2’. Imagine you have a board that can be set down and safely crossed, then picked up to be put down again. Could you get from the start off to safety, which is infinitely far away, if your board is some fixed, finite length?

No, you can’t. The problem amounts to asking how big the gap between one prime number and the next can be. It turns out there’s no limit. Give me any number, as small or as large as you like, and I can find some prime number whose distance to the next prime is bigger than your number. There are arbitrarily large gaps between prime numbers.
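The guarantee comes cheap: the consecutive numbers n! + 2, n! + 3, …, n! + n are divisible by 2, 3, …, n respectively, giving a prime-free stretch as long as you like. Hunting down where a big gap first appears is the more entertaining chore; a sketch:

```python
def is_prime(n):
    # ordinary trial-division test, fine for small numbers
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def first_gap_at_least(g):
    # the first prime followed by at least g of open, prime-free sea
    p = 2
    while True:
        q = p + 1
        while not is_prime(q):
            q += 1
        if q - p >= g:
            return p, q
        p = q

print(first_gap_at_least(4))   # (7, 11)
print(first_gap_at_least(10))  # (113, 127)
```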

Gaussian primes, though? Since a Gaussian prime might have nearest neighbors in any direction? Nobody knows. We know there are arbitrarily large gaps. Pick a moat size; we can (eventually) find a Gaussian prime that’s at least that far away from its nearest neighbors. But this does not say whether it’s impossible to get from the smallest Gaussian primes — $1 + \imath$ and its companions $-1 + \imath$ and on — infinitely far away. We know there’s a moat of width 6 separating the origin of things from infinity. We don’t know whether there are bigger ones.

You’re not going to solve this problem. Unless I have more brilliant readers than I know about; if I have ones who can solve this problem then I might be too intimidated to write anything more. But there is surely a pleasant pastime, maybe a charming game, to be made from this. Try finding the biggest possible moats around some set of Gaussian prime islands.

Ellen Gethner, Stan Wagon, and Brian Wick’s A Stroll Through the Gaussian Primes describes this moat problem. It also sports some fine pictures of where the Gaussian primes are and what kinds of moats you can find. If you don’t follow the reasoning, you can still enjoy the illustrations.

## The Summer 2017 Mathematics A To Z: Benford's Law

Today’s entry in the Summer 2017 Mathematics A To Z is one for myself. I couldn’t post this any later.

# Benford’s Law.

My car’s odometer first read 9 on my final test drive before buying it, in June of 2009. It flipped over to 10 barely a minute after that, somewhere near Jersey Freeze ice cream parlor at what used to be the Freehold Traffic Circle. Ask a Central New Jersey person of sufficient vintage about that place. Its odometer read 90 miles sometime that weekend, I think while I was driving to The Book Garden on Route 537. Ask a Central New Jersey person of sufficient reading habits about that place. It’s still there. It flipped over to 100 sometime when I was driving back later that day.

The odometer read 900 about two months after that, probably while I was driving to work, as I had a longer commute in those days. It flipped over to 1000 a couple days after that. The odometer first read 9,000 miles sometime in spring of 2010 and I don’t remember what I was driving to for that. It flipped over from 9,999 to 10,000 miles several weeks later, as I pulled into the car dealership for its scheduled servicing. Yes, this kind of impressed the dealer that I got there exactly on the round number.

The odometer first read 90,000 in late August of last year, as I was driving to some competitive pinball event in western Michigan. It’s scheduled to flip over to 100,000 miles sometime this week as I get to the dealer for its scheduled maintenance. While cars have gotten to be much more reliable and durable than they used to be, the odometer will never flip over to 900,000 miles. At least I can’t imagine owning it long enough, at my rate of driving over the past eight years, for that ever to happen. It’s hard to imagine living long enough for the car to reach 900,000 miles. Thursday or Friday it should flip over to 100,000 miles. The leading digit on the odometer will be 1 or, possibly, 2 for the rest of my association with it.

The point of this little autobiography is this observation. Imagine all the days that I have owned this car, from sometime in June 2009 to whatever day I sell, lose, or replace it. Pick one. What is the leading digit of my odometer on that day? It could be anything from 1 to 9. But it’s more likely to be 1 than it is 9. Right now it’s as likely to be any of the digits. But after this week the chance of ‘1’ being the leading digit will rise, and become rather more likely than that of ‘9’. And it’ll never lose that edge.

This is a reflection of Benford’s Law. It is named, as most mathematical things are, imperfectly. The law-namer was Frank Benford, a physicist, who in 1938 published a paper, The Law Of Anomalous Numbers. It confirmed an observation of Simon Newcomb, a 19th-century astronomer and mathematician of an exhausting number of observations and developments. Newcomb noticed something about the logarithm tables that anyone who needed to compute referred to often: the earlier pages were more worn-out and dirty and damaged than the later pages. People worked with numbers that start with ‘1’ more than they did numbers starting with ‘2’. And with numbers starting ‘2’ more than those starting ‘3’. More starting ‘3’ than starting ‘4’. And on. Benford showed this was not some fluke of calculations. It turned up in bizarre collections of data. The surface areas of rivers. The populations of thousands of United States municipalities. Molecular weights. The digits that turned up in an issue of Reader’s Digest. There is a bias in the world toward numbers that start with ‘1’.

And this is, prima facie, crazy. How can the surface areas of rivers somehow prefer to be, say, 100-199 hectares instead of 500-599 hectares? A hundred is a human construct. (Indeed, it’s many human constructs.) That we think ten is an interesting number is an artefact of our society. To think that 100 is a nice round number and that, say, 81 or 144 are not is a cultural choice. Grant that the digits of street addresses of people listed in American Men of Science — one of Benford’s data sources — have some cultural bias. How can another of his sources, molecular weights, possibly?

The bias sneaks in subtly. Don’t they all? It lurks at the edge of the table of data. The table header, perhaps, where it says “River Name” and “Surface Area (sq km)”. Or at the bottom where it says “Length (miles)”. Or it’s never explicit, because I take for granted people know my car’s mileage is measured in miles.

What would be different in my introduction if my car were Canadian, and the odometer measured kilometers instead? … Well, I’d not have driven the 9th kilometer; someone else doing a test-drive would have. The 90th through 99th kilometers would have come a little earlier that first weekend. The 900th through 999th kilometers too. I would have passed the 99,999th kilometer years ago. In kilometers my car has been in the 100,000s for something like four years now. It’s less absurd that it could reach the 900,000th kilometer in my lifetime, but that still won’t happen.

What would be different is the precise dates about when my car reached its milestones, and the amount of days it spent in the 1’s and the 2’s and the 3’s and so on. But the proportions? What fraction of its days it spends with a 1 as the leading digit versus a 2 or a 5? … Well, that’s changed a little bit. There is some final mile, or kilometer, my car will ever register and it makes a little difference whether that’s 239,000 or 385,000. But it’s only a little difference. It’s the difference in how many times a tossed coin comes up heads on the first 1,000 flips versus the second 1,000 flips. They’ll be different numbers, but not that different.

What’s the difference between a mile and a kilometer? A mile is longer than a kilometer, but that’s it. They measure the same kinds of things. You can convert a measurement in miles to one in kilometers by multiplying by a constant. We could as well measure my car’s odometer in meters, or inches, or parsecs, or lengths of football fields. The difference is what number we multiply the original measurement by. We call this “scaling”.

Whatever we measure, in whatever unit we measure, has to have a leading digit of something. So it’s got to have some chance of starting out with a ‘1’, some chance of starting out with a ‘2’, some chance of starting out with a ‘3’, and so on. But that chance can’t depend on the scale. Measuring something in smaller or larger units doesn’t change the proportion of how often each leading digit is there.

These facts combine to imply that leading digits follow a logarithmic-scale law. The leading digit should be a ‘1’ something like 30 percent of the time. And a ‘2’ about 18 percent of the time. A ‘3’ about one-eighth of the time. And it decreases from there. ‘9’ gets to take the lead a meager 4.6 percent of the time.
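Spelled out, the leading digit is d with probability $\log_{10}(1 + \frac{1}{d})$. Data that sprawls across many orders of magnitude shows it; the powers of two, for example, come remarkably close. A sketch:

```python
from collections import Counter
from math import log10

# Benford's prediction for each leading digit
benford = {d: log10(1 + 1 / d) for d in range(1, 10)}

# leading digits of the powers of two, a scale-spanning data set
counts = Counter()
v = 1
for _ in range(3000):
    counts[int(str(v)[0])] += 1
    v *= 2

for d in range(1, 10):
    print(d, round(counts[d] / 3000, 3), round(benford[d], 3))
```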

Roughly. It’s not going to be so all the time. Measure the heights of humans in meters and there’ll be far more leading digits of ‘1’ than we should expect, as most people are between 1 and 2 meters tall. Measure them in feet and ‘5’ and ‘6’ take a great lead. The law works best when data can sprawl over many orders of magnitude. If we lived in a world where people could as easily be two inches as two hundred feet tall, Benford’s Law would make more accurate predictions about their heights. That something is a mathematical truth does not mean it’s independent of all reason.

For example, the reader thinking back some may be wondering: granted that atomic weights and river areas and populations carry units with them that create this distribution. How do street addresses, one of Benford’s observed sources, carry any unit? Well, street addresses are, at least in the United States custom, a loose measure of distance. The 100 block (for example) of a street is within one … block … from whatever the more important street or river crossing that street is. The 900 block is farther away.

This extends further. Block numbers are proxies for distance from the major cross feature. House numbers on the block are proxies for distance from the start of the block. We have a better chance to see street number 418 than 1418, to see 418 than 488, or to see 418 than to see 1488. We can look at Benford’s Law in the second and third and other minor digits of numbers. But we have to be more cautious. There is more room for variation and quirk events. A block-filling building in the downtown area can take whatever street number the owners think most auspicious. Smaller samples of anything are less predictable.

Nevertheless, Benford's Law has become famous among forensic accountants over the past several decades, if we allow the use of the word “famous” in this context. Its fame there is thanks to the economist Hal Varian and the accounting scholar Mark Nigrini. They observed that real-world financial data should be expected to follow this same distribution. If they don't, then there might be something suspicious going on. This is not an ironclad rule. There might be good reasons for the discrepancy. If your work trips are always to the same location, and always for one week, and there's one hotel it makes sense to stay at, and you always learn you'll need to make the trips about one month ahead of time, of course the hotel bill will be roughly the same. Benford's Law is a simple, rough tool, a way to decide what data to scrutinize for mischief. With this in mind I trust none of my readers will make the obvious leading-digit mistake when padding their expense accounts anymore.
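If you'd like to see what Benford-friendly data looks like, the powers of 2 are a classic example of a sequence sprawling over many orders of magnitude. Here's a little sketch of the comparison (my choice of data set, not one of Benford's original sources):

```python
from collections import Counter
from math import log10

# Leading digits of 2^0 through 2^999, a sequence famous for obeying Benford's Law.
leads = Counter(int(str(2 ** n)[0]) for n in range(1000))

for d in range(1, 10):
    observed = leads[d] / 1000
    expected = log10(1 + 1 / d)
    print(f"{d}: observed {observed:.3f}, Benford predicts {expected:.3f}")
```

A forensic accountant's first pass is essentially this: tally the leading digits of the figures under scrutiny and see how far the tallies stray from the predicted shares.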

Since I’ve done you that favor, anyone out there think they can pick me up at the dealer’s Thursday, maybe Friday? Thanks in advance.

## Reading the Comics, December 2, 2015: The Art Of Maths Edition

Bill Amend's FoxTrot Classics for the 28th of November (originally run in 2004) depicts a “Christmas Card For Smart People”. It uses the familiar motif of “ability to do arithmetic” as denoting smartness. The key to the first word is remembering that mathematicians use the symbol ‘e’ to represent a number that’s just a little over 2.71828. We call the number ‘e’, or sometimes ‘the base of the natural logarithm’. It turns up all over the place. If you have almost any quantity that grows or that shrinks at a speed proportional to how much there is, and you describe how much of the stuff there is over time, you’ll find an ‘e’. Leonhard Euler, who’s renowned for major advances in every field of mathematics, is also renowned for major advances in mathematical notation, and he gave us ‘e’ for that number.
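One standard way to see where that number comes from is the compound-interest limit: (1 + 1/n)<sup>n</sup> closes in on 2.71828… as n grows. A quick sketch, for anyone who wants to watch it converge:

```python
# Compounding 100% interest ever more often: (1 + 1/n)^n approaches e.
for n in (1, 10, 100, 10_000, 1_000_000):
    print(n, (1 + 1 / n) ** n)
```

The first line prints 2.0, and by the last the value agrees with e to about five decimal places.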

The key to the second word there is remembering from physics that force equals mass times acceleration. Therefore the force divided by the acceleration is …

And so that inspires this essay’s edition title. There are several comics in this selection that are about the symbols or the representations of mathematics, and that touch on the subject as a visual art.

Matt Janz’s Out of the Gene Pool for the 28th of November first ran the 26th of October, 2002. It would make for a good word problem, too, with a couple of levels: given the constraints of (a slightly looser) budget, how do they get the greatest number of cookies? Or if some cookies are better than others, how do they get the most enjoyment from their cookie purchase? Working out the greatest amount of enjoyment within a given cookie budget, with different qualities of cookies, can be a good introduction to optimization problems and how subtle they can be.

Bill Holbrook’s On The Fastrack for the 29th of November speaks in support of accounting. It’s a worthwhile message. The field doesn’t get much respect, not from the general public, and not from the typical mathematics department. The general public maybe thinks of accounting as not much more than a way companies nickel-and-dime them. If the mathematics departments I’ve associated with are fair representatives, accounting isn’t even thought of except by the assistant professor doing a seminar on financial mathematics. (And I’m not sure accounting gets mentioned there, since there’s exciting stuff about the Black-Scholes Equation and options markets to think about instead.) And that’s despite accounting probably being, by volume, the most used part of mathematics. Anyway, Holbrook’s strip probably won’t get the field a better reputation. But it has got some great illustrations of doing things with numbers. The folks in mathematics departments certainly have had days feeling like they’ve done each of these things.

Dave Coverly’s Speed Bump for the 30th of November is a compound interest joke. I admit I’ve told this sort of joke myself, proposing that the hour cut out of the day in spring when Daylight Saving Time starts comes back as a healthy hour and three minutes in autumn when it’s taken out of savings. If I can get the delivery right I might have someone going for that three minutes.

Mikael Wulff and Anders Morgenthaler’s Truth Facts for the 30th of November is a Venn diagram joke for breakfast. I would bet they’re kicking themselves for not making the intersection be the holes in the center.

Mark Anderson’s Andertoons for this week interests me. It uses a figure to explain how gallons and quarts and pints and other units relate to each other. I like it, but I’m embarrassed to say how long in my life it took to work out the relations between pints, quarts, and gallons, and particularly whether the quart or the pint was the larger unit. I blame part of that on my never really having to mix a pint of something with a quart of something else, which ought to have sorted that out. Anyway, let’s always cherish good representations of information. Good representations organize information and relationships in ways that are easy to remember, or easy to reconstruct or extend.

John Graziano’s Ripley’s Believe It or Not for the 2nd of December tries to visualize how many ways there are to arrange a Rubik’s Cube. Counting off permutations of things by how many seconds it’d take to get through them all is a common game. The key to producing a staggering length of time is that one billion seconds are nearly 32 years, and the number of combinations of things adds up really, really fast. There’s over eight billion ways to draw seven letters in a row, after all, if every letter is equally likely and if you don’t limit yourself to real or even imaginable words. Rubik’s Cubes have a lot of potential arrangements. Graziano misspells Rubik, but I have to double-check and make sure I’ve got it right every time myself. I didn’t know that about the pigeons.
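The arithmetic behind that game is easy enough to check. Here's a sketch (the cube's arrangement count, about 4.3 × 10¹⁹, is the standard figure for the 3×3×3 cube):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 60 * 60          # about 31.6 million seconds

# One billion seconds is nearly 32 years.
print(1_000_000_000 / SECONDS_PER_YEAR)           # roughly 31.7

# Strings of seven letters, 26 choices for each slot: over eight billion.
print(26 ** 7)                                    # 8031810176

# Arrangements of a Rubik's Cube, inspected one per second:
CUBE_ARRANGEMENTS = 43_252_003_274_489_856_000
print(CUBE_ARRANGEMENTS / SECONDS_PER_YEAR)       # over a trillion years
```

That last line is the whole trick of the genre: multiply enough modest choices together and the seconds pile up past any human timescale.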

Charles Schulz’s Peanuts for the 2nd of December (originally run in 1968) has Peppermint Patty reflecting on the beauty of numbers. I don’t think it’s unusual to find some numbers particularly pleasant and others not. Some numbers are easy to work with; if I’m trying to add up a set of numbers and I have a 3, I look instinctively for a 7 because of how nice 10 is. If I’m trying to multiply numbers, I’d much rather multiply by a 5 or a 25 than by a 7 or an 18. Typically, people find they do better on addition and multiplication with lower numbers like two and three, and get shaky with sevens and eights and such. It may be quirky. My love is a wizard with 7’s, but can’t do a thing with 8. But it’s no more irrational than the way a person might find a pyramid attractive but a sphere boring and a stellated icosahedron ugly.

I’ve seen some comments suggesting that Peppermint Patty is talking about numerals, that is, the way we represent numbers. That she might find the shape of the 2 gentle, while 5 looks hostile. (I can imagine turning a 5 into a drawing of a shouting person with a few pencil strokes.) But she doesn’t seem to say one way or another. She might see a page of numbers as visual art; she might see them as wonderful things with which to play.

## Missed A Mile

I’m honestly annoyed with myself. It’s only a little annoyed, though. I didn’t notice when I made my 5,280th tweet on @Nebusj. It’s one of those numbers — the count of feet in a mile — that fascinated the young me. It seemed to come from nowhere — why not 5,300? Why not 5,250? Heck, why not 5,000? — and the most I heard about why was that 5,280 feet made up eight furlongs. What’s a furlong, I might wonder? 5,280 divided by eight is 660, which doesn’t clear things up much.

Yes, yes, I know now why it’s 5,280. It was me at age seven that couldn’t sort out why this rather than that.

But what a number. It had that compelling mix of precision and mystery. And so divisible! When you’ve learned how to do division and think it’s fun, a number like 5,280 with so many divisors is a joy. There are 48 of them, all told. All the numbers you see on a times table except for 7 and 9 go into it. It’s practically teasing the mathematically-inclined kid to find all these factors. 5,279 and 5,281 are mere primes; 5,278 and 5,282 aren’t nearly so divisor-rich as 5,280. Even 1,760, which I knew well as the number of yards in a mile, isn’t so interesting. And compared to piddling little numbers like 12 or 144 — well!
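Anyone who wants to re-create that childhood fun without the pencil work can do it in a few lines of Python (my sketch, nothing the young me had access to):

```python
def divisors(n):
    """All positive divisors of n, found by plain trial division."""
    return [d for d in range(1, n + 1) if n % d == 0]

def is_prime(n):
    """True when n has no divisor between 2 and its square root."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

print(len(divisors(5280)))                          # 48 divisors
print([d for d in range(1, 13) if 5280 % d == 0])   # every times-table number but 7 and 9
print(is_prime(5279), is_prime(5281))               # True True: flanked by mere primes
print(len(divisors(1760)))                          # 24: yards-in-a-mile is only half as rich
```

Trial division is slow for big numbers, but for a four-digit curiosity it answers instantly.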

5,280 is not why I’m a mathematician. I credit a Berenstain Bears book that clearly illustrated what mathematicians do is “add up sums in an observatory on the moon”. But 5,280 is one of those sparkling lights that attracted me to the subject. I imagine having something like this, accessible but mysterious, is key to getting someone hooked on a field. And while I agree the metric system is best for most applications, it’s also true 1,000 isn’t so interesting a number to stare at. You can find plenty of factors of it, but they’ll all follow too-easy patterns. You won’t see a surprising number like 55 or 352 or 1,056 or 1,320 among them.

So, I’m sorry to miss an interesting number like that for my 5,280th tweet. I hope I remember to make some fuss for my 5,280th blog post.

## A Hundred, And Other Things

The other day my humor blog featured a little table of things for which a “hundred” of them isn’t necessarily 100 of them. It’s just a little bit of wonder I found on skimming the “Index to Units and Systems of Units” page, one of those simple reference sites that is just compelling in how much trivia there is to enjoy. The page offers examples of various units, from those that are common today (acres, meters, gallons), to those of local or historic use (the grosses tausend, the farthingdale), to those of specialized application (the seger cone, used by potters to measure the maximum temperature of a kiln). It’s just a wonder of things that can be measured.

There’s a wonderful diversity of commodities for which a “hundred” is not 100 units, though. Many — skins, nails, eggs, herring — have a “hundred” that consists of 120. That seems to defy the definition of a “hundred”, but I like to think it serves as a reminder that units are creations of humans to organize the way we think about things. It’s convenient to have a unit that is “an awful lot of, but not unimaginably many of” whatever we’re talking about, and a “hundred” seems to serve that role pretty well. The “hundreds” which are actually 120 probably come about from wanting a count of things that’s both an awful lot of the thing and also an amount that can be subdivided into equal parts very well. 120 of a thing can be divided evenly into two, three, four, five, six, eight, ten, twelve, and so on equal shares; 100 is relatively impoverished for equal subdivisions.
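The subdivision claim is easy to check: 120 has sixteen divisors where 100 has only nine. A quick sketch, if you'd like to count them yourself:

```python
def divisors(n):
    """All positive divisors of n, by trial division."""
    return [d for d in range(1, n + 1) if n % d == 0]

print(divisors(120))   # 16 ways to split a "hundred" of 120 into equal shares
print(divisors(100))   # only 9 for a literal hundred
```

Mathematicians call numbers like 120, with more divisors than any smaller number, highly composite; merchants counting herring seem to have found the idea on their own.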

I do not know the story behind some of the more curious hundreds, such as the counting of 106 sheep or lambs as a hundred in Roxburghshire and Selkirkshire (counties in the southeast of Scotland), or the counting of 160 dried fish as a hundred, but it likely reflects the people working with such things finding these to be slightly more convenient numbers than a plain old 100 for the “big but not unimaginably many of” a thing. The 225 making up a hundred of onions and garlic, for example, seems particularly exotic, but it’s less so when you notice that’s 15 times 15. One of the citations of this “hundred” describes it as “15 ropes and every rope each with 15 heads”. Suddenly this hundred is a reasonable number of things that are themselves reasonable numbers of things.

Of course if they hadn’t called it a “hundred” then I wouldn’t have had a pretty easy comic bit to build from it, but how were they to know the meaning of “hundred” in everyday speech would settle down to an unimaginative solitary value?