My All 2020 Mathematics A to Z: Statistics


I owe Mr Wu, author of the Singapore Maths Tuition blog, thanks for another topic for this A-to-Z. Statistics is a big field of mathematics, and so I won’t try to give you a course’s worth in 1500 words. But I do have to start with a question. (And I seem to have ended at two thousand words anyway.)

Color cartoon illustration of a coati in a beret and neckerchief, holding up a director's megaphone and looking over the Hollywood hills. The megaphone has the symbols + x (division obelus) and = on it. The Hollywood sign is, instead, the letters MATHEMATICS. In the background are spotlights, with several of them crossing so as to make the letters A and Z; one leg of the spotlights has 'TO' in it, so the art reads out, subtly, 'Mathematics A to Z'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

Statistics.

Is statistics mathematics?

The answer seems obvious at first. Look at a statistics textbook. It’s full of algebra. And graphs of great sloped mounds. There’s tables full of four-digit numbers in back. The first couple chapters are about probability. They’re full of questions about rolling dice and dealing cards and guessing whether the sibling who just entered is the younger.

But then, why does Rutgers University have a Department of Mathematics and also a Department of Statistics? And considered so distinct as to have an interdisciplinary mathematics-and-statistics track? It’s not an idiosyncrasy of Rutgers. Many schools have the same division between mathematics and statistics. Some join them into a Department of Mathematics and Statistics. But the name hints at something just different about the field. Not too different, though. Physics and Chemistry and important threads of Economics and History are full of mathematics. But you never see a Department of Mathematics and History.

Thinking of the field’s history, though, and its use, tells us more. Some of the earliest work we now recognize as statistics was Arab mathematicians deciphering messages. This cryptanalysis is the observation that (in English) a three-letter word is very likely to be ‘the’, mildly likely to be ‘one’, and not likely to be ‘pyx’. A more modern forerunner is the Republic of Venice supposedly calculating that war with Milan would not be worth the winning. Or the gathering of mortality tables, recording how many people of what age can be expected to die any year, and what from. (Mortality tables are another of Edmond Halley’s claims to fame, though they won’t displace his comet work.) Florence Nightingale’s charts explaining how more soldiers died of disease than of fighting in the Crimean War. William Sealy Gosset sharing sample-testing methods developed at the Guinness brewery.

You see a difference in kind to a mathematical question like finding a square with the same area as this trapezoid. It’s not that mathematics is not practical; it’s always been. And it’s not that statistics lacks abstraction and pure mathematics content. But statistics wears practicality in a way that number theory won’t.

Practical about what? History and etymology tip us off. The early uses of things we now see as statistics are about things of interest to the State. Decoding messages. Counting the population. Following — in the study of annuities — the flow of money between peoples. With the industrial revolution, statistics sneaks into the factory. To have an economy of scale you need a reliable product. How do you know whether the product is reliable, without testing every piece? How can you test every beer brewed without drinking it all?

One great leg of statistics — it’s tempting to call it the first leg, but the history is not so neat as to make that work — is descriptive. This gives us things like mean and median and mode and standard deviation and quartiles and quintiles. These try to let us represent more data than we can really understand in a few words. We lose information in doing so. But if we are careful to remember the difference between the descriptive statistics we have and the original population (nb, a word of the State)? We might not do ourselves much harm.
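(As an aside of mine, and only to show how small these descriptive tools are, here is a sketch using Python’s standard statistics module. The sample values are invented; nothing hangs on them.)

```python
# A minimal sketch of descriptive statistics, using only the standard library.
# The data values are made up for illustration.
import statistics

heights = [150, 152, 155, 160, 160, 162, 165, 170, 171, 180]  # hypothetical sample, in cm

print(statistics.mean(heights))            # arithmetic mean
print(statistics.median(heights))          # middle value
print(statistics.mode(heights))            # most common value
print(statistics.stdev(heights))           # sample standard deviation
print(statistics.quantiles(heights, n=4))  # quartile cut points
```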

Another great leg is inferential statistics. This uses tools with names like z-score and the Student t distribution. And talks about things like p-values and confidence intervals. Terms like correlation and regression and such. This is about looking for causes in complex scenarios. We want to believe there is a cause to, say, a person’s lung cancer. But there is no tracking down what that is; there are too many things that could start a cancer, and too many of them will go unobserved. But we can notice that people who smoke have lung cancer more often than those who don’t. We can’t say why a person recovered from the influenza in five days. But we can say that people who were vaccinated got influenza less often, and recovered from it quicker, than those who did not. We can get the dire warning that “correlation is not causation”, uttered by people who don’t like what the correlation suggests may be a cause.

Also by people being honest, though. In the 1980s geologists wondered if the sun might have a not-yet-noticed companion star. Its orbit would explain an apparent periodicity in meteor bombardments of the Earth. But completely random bombardments would produce apparent periodicity sometimes. It’s much the same way trees in a forest will sometimes seem to line up. Or imagine finding there is a neighborhood in your city with a high number of arrests. Is this because it has the highest rate of street crime? Or is the rate of street crime the same as any other spot and there are simply more cops here? But then why are there more cops to be found here? Perhaps they’re attracted by the neighborhood’s reputation for high crime. It is difficult to see through randomness, to untangle complex causes, and to root out biases.

The tools of statistics, as we recognize them, largely came together in the 19th and early 20th century. Adolphe Quetelet, a Flemish scientist, set out much early work, including introducing the concept of the “average man”, representative of societal matters. He studied the crime statistics of Paris for five years and noticed how regular the numbers were. The implication, to Quetelet, was that crime is a societal problem. It’s something we can control by mindfully organizing society, without infringing anyone’s autonomy. Put like that, the study of statistics seems an obvious and indisputable good, a way for governments to better serve their public.

So here is the dispute. It’s something mathematicians understate when sharing the stories of important pioneers like Francis Galton or Karl Pearson. They were eugenicists. Part of what drove their interest in studying human populations was to find out which populations were the best. And how to help them overcome their more-populous lessers.

I don’t have the space, or depth of knowledge, to fully recount the 19th century’s racial politics, popular scientific understanding, and international relations. Please accept this as a loose cartoon of the situation. Do not forget the full story is more complex and more ambiguous than I write.

One of the 19th century’s greatest scientific discoveries was evolution. That populations change in time, in size and in characteristics, even budding off new species, is breathtaking. Another of the great discoveries was entropy. This incorporated into science the nostalgic romantic notion that things used to be better. I write that figuratively, but to express the way the notion is felt.

There are implications. If the Sun itself will someday wear out, how long can the Tories last? It was easy for the aristocracy to feel that everything was quite excellent as it was now and dread the inevitable change. This is true for the aristocracy of any country, although the United Kingdom had a special position here: it enjoyed a privileged place among the Great Powers and the Imperial Powers through the 19th century. Note we still call it the Victorian era, when Louis Napoleon or Giuseppe Garibaldi or Otto von Bismarck were more significant European figures. (Granting Victoria had the longer presence on the world stage; “the 19th century” had a longer presence still.) But it could rarely feel secure, always aware that France or Germany or Russia was ready to displace it.

And even internally: if Darwin was right and reproductive success is all that matters in the long run, what does it say that so many poor people breed so much? How long could the world hold good things? Would the eternal famines and poverty of the “overpopulated” Irish or Indian colonial populations become all that was left? During the Crimean War, the British military found a shocking number of recruits from the cities were physically unfit for service. In the 1850s this was only an inconvenience; there were plenty of strong young farm workers to recruit. But the British population was already majority-urban, and becoming more so. What would happen by 1880? 1910?

One can follow the reasoning, even if we freeze at the racist conclusions. And we have the advantage of a century-plus hindsight. We can see how the eugenic attitude leads quickly to horrors. And also that it turns out “overpopulated” Ireland and India stopped having famines once they evicted their colonizers.

Does this origin of statistics matter? The utility of a hammer does not depend on the moral standing of its maker. The Central Limit Theorem has an even stronger pretense to objectivity. Why not build as best we can with the crooked timbers of mathematics?

It is in my lifetime that a popular racist book claimed science proved that Black people were intellectually inferior to White people. This on the basis of supposedly significant differences in the populations’ IQ scores. It proposed that racism wasn’t a thing, or at least not anything worth doing anything about. It would be mere “realism”. Intelligence Quotients, incidentally, are another idea we can trace to Francis Galton. But an IQ test is not objective. The best we can say is it might be standardized. This says nothing about the biases built into the test, though, or about the people evaluating the results.

So what if some publisher 25 years ago got suckered into publishing a bad book? And racist chumps bought it because they liked its conclusion?

The past is never fully past. In the modern environment of surveillance capitalism we have abundant data on any person. We have abundant computing power. We can find many correlations. This gives people wild ideas for “artificial intelligence”. Something to make predictions. Who will lose a job soon? Who will get sick, and from what? Who will commit a crime? Who will fail their A-levels? At least, who is most likely to?

These seem like answerable questions. One can imagine an algorithm that would answer them fairly. And make for a better world, one which concentrates support around the people most likely to need it. If we were wise, we would ask our friends in the philosophy department about how to do this. Or we might just plunge ahead and trust that since an algorithm runs automatically it must be fair. Our friends in the philosophy department might have some advice there too.

Consider, for example, the body mass index. It was developed by our friend Adolphe Quetelet, as he tried to understand the kinds of bodies in the population. It is now used to judge whether someone is overweight. Weight is treated as though it were a greater threat to health than actual illnesses are. Your diagnosis for the same condition with the same symptoms will be different — and on average worse — if your number says 25.2 rather than 24.8.

We must do better. We can hope that learning how tools were used to injure people will teach us to use them better, to reduce or to avoid harm. We must fight our tendency to latch on to simple ideas as the things we can understand in the world. We must not mistake the greater understanding we have from the statistics for complete understanding. To do this we must have empathy, and we must have humility, and we must understand what we have done badly in the past. We must catch ourselves when we repeat the patterns that brought us to past evils. We must do more than only calculate.


This and the rest of the 2020 A-to-Z essays should be at this link. All the essays from every A-to-Z series should be gathered at this link. And I am looking for V, W, and X topics to write about. Thanks for your thoughts, and thank you for reading.

My All 2020 Mathematics A to Z: John von Neumann


Mr Wu, author of the Singapore Maths Tuition blog, suggested another biographical sketch for this year of biographies. Once again it’s of a person too complicated to capture in full in one piece, even at the length I’ve been writing. So I take a slice out of John von Neumann’s life here.

Color cartoon illustration of a coati in a beret and neckerchief, holding up a director's megaphone and looking over the Hollywood hills. The megaphone has the symbols + x (division obelus) and = on it. The Hollywood sign is, instead, the letters MATHEMATICS. In the background are spotlights, with several of them crossing so as to make the letters A and Z; one leg of the spotlights has 'TO' in it, so the art reads out, subtly, 'Mathematics A to Z'.
Art by Thomas K Dye, creator of the web comics Projection Edge, Newshounds, Infinity Refugees, and Something Happens. He’s on Twitter as @projectionedge. You can get to read Projection Edge six months early by subscribing to his Patreon.

John von Neumann.

In March 1919 the Hungarian People’s Republic, strained by Austria-Hungary’s loss in the Great War, collapsed. The Hungarian Soviet Republic, the world’s second Communist state, replaced it. It was a bad time to be a wealthy family in Budapest. The Hungarian Soviet lasted only a few months. It was crushed by the internal tension between city and countryside. By poorly-fought wars to restore the country’s pre-1914 borders. By the hostility of the Allied Powers. After the Communist leadership fled came a new Republic, and a pogrom. Europeans are never shy about finding reasons to persecute Jewish people. It was a bad time to be a Jewish family in Budapest.

Von Neumann was born to a wealthy, non-observant Jewish family in Budapest in 1903. He acquired the honorific “von” in 1913, when his father Max Neumann was honored for service to the Austro-Hungarian Empire and paid for a hereditary appellation.

It is, once again, difficult to encompass von Neumann’s work, and genius, in one piece. He was recognized as genius early. By 1923 he published a logical construction for the counting numbers that’s still the modern default. His 1926 doctoral thesis was in set theory. He was invited to lecture on quantum theory at Princeton by 1929. He was one of the initial six mathematics professors at the Institute for Advanced Study. We have a thing called von Neumann algebras after his work. He gave the first rigorous proof of an ergodic theorem. He partly solved one of Hilbert’s problems. He studied non-linear partial differential equations. He was one of the inventors of the electronic computer as we know it, both the theoretical and the practical ideas.
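That construction of the counting numbers — each number is the set of all the smaller numbers — is simple enough to sketch in a few lines. The code below is my own illustration of the idea, not anything of von Neumann’s; Python’s frozenset stands in for a mathematical set.

```python
# A sketch of the von Neumann construction of the counting numbers:
# 0 is the empty set, and each successor n+1 is the set n ∪ {n}.
def von_neumann(n):
    """Return the von Neumann representation of the natural number n."""
    number = frozenset()            # 0 = {}
    for _ in range(n):
        number = number | {number}  # n+1 = n ∪ {n}
    return number

# A pleasant side effect: the set representing n has exactly n elements.
assert all(len(von_neumann(n)) == n for n in range(6))
print(von_neumann(3))  # nested frozensets standing for {0, 1, 2}
```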

And, the sliver I choose to focus on today, he made game theory into a coherent field.

The term “game theory” makes it sound like a trifle. We don’t call “genius” anyone who comes up with a better way to play tic-tac-toe. The utility of the subject appears when we notice what von Neumann thought he was writing about. Von Neumann’s first paper on this came in 1928. In 1944 he and Oskar Morgenstern published the textbook Theory Of Games And Economic Behavior. In Chapter 1, Section 1, they set their goals:

The purpose of this book is to present a discussion of some fundamental questions of economic theory which require a treatment different from that which they have found thus far in the literature. The analysis is concerned with some basic problems arising from a study of economic behavior which have been the center of attention of economists for a long time. They have their origin in the attempts to find an exact description of the endeavor of the individual to obtain a maximum of utility, or in the case of the entrepreneur, a maximum of profit.

Somewhere along the line von Neumann became interested in how economics worked. Perhaps because his family had money. Perhaps because he saw how one could model an “ideal” growing economy — matching price and production and demand — as a linear programming question. Perhaps because economics is a big, complicated field with many unanswered questions. There was, for example, little good idea of how attendees at an auction should behave. What is the rational way to bid, to get the best chances of getting the things one wants at the cheapest price?

In 1928, von Neumann abstracted all sorts of economic questions into a basic model. The model has almost no features, so very many games look like it. In this, you have a goal, and a set of options for what to do, and an opponent, who also has options of what to do. Also some rounds to achieve your goal. You see how this abstract a structure describes many things one could do, from playing Risk to playing the stock market.

And von Neumann discovered that, in the right circumstances, you can find a rational way to bid at an auction. Or, at least, to get your best possible outcome whatever the other person does. The proof has the in-retrospect obviousness of brilliance. von Neumann used a fixed-point theorem. Fixed point theorems came to mathematics from thinking of functions as mappings. Functions match elements in a set called the domain to those in a set called the range. The function maps the domain into the range. If the range is also the domain? Then we can do an iterated mapping. Under the right circumstances, there’s at least one point that maps to itself.

In the light of game theory, a function is the taking of a turn. The domain and the range are the states of whatever’s in play. In this type of game, you know all the options everyone has. You know the state of the game. You know what the past moves have all been. You know what you and your opponent hope to achieve. So you can predict your opponent’s strategy. And therefore pick a strategy that gets you the best option available given your opponent is trying to do the same. So will your opponent. So you both end up with the best attainable outcome for the both of you; this is the minimax theorem.
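Here is a rough sketch of the pure-strategy version of that reasoning, in Python. The payoff numbers are invented for illustration. The full minimax theorem is about mixed strategies — randomizing over your options — which this sketch doesn’t attempt; it only shows each player looking for the choice whose worst case is best.

```python
# Pure-strategy security levels for a small zero-sum game.
# Entries are the row player's winnings; the column player loses that amount.
# The numbers are made up for illustration.
payoff = [
    [ 3, -1,  2],
    [ 1,  0,  4],
    [-2,  5,  1],
]

# Row player: pick the row whose worst column is as good as possible.
maximin = max(min(row) for row in payoff)

# Column player: pick the column whose worst row costs them the least.
minimax = min(max(payoff[r][c] for r in range(len(payoff)))
              for c in range(len(payoff[0])))

print(maximin, minimax)  # 0 and 3 here: no saddle point in pure strategies
```

When those two numbers agree the game has a saddle point; von Neumann’s theorem says that once mixed strategies are allowed, they always agree.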

It may strike you that, given this, the game doesn’t need to be played anymore. Just pick your strategy, let your opponent pick one, and the winner is determined. So it would, if we played our strategies perfectly, and if we didn’t change strategies mid-game. I would chuckle at the mathematical view that we study a game to relieve ourselves of the burden of playing. But I know how many grand strategy video games I have that I never have time to play.

After this 1928 paper von Neumann went on to other topics for about a dozen years. Why create a field of mathematics and then do nothing with it? For one, we see it as a gap only because we are extracting, after the fact, this thread of his life. He had other work, particularly in quantum mechanics, operators, measure theory, and lattice theory. He surely did not see himself abandoning a new field. He saw, having found an interesting result, new interesting questions.

But Philip Mirowski’s 1992 paper What Were von Neumann and Morgenstern Trying to Accomplish? points out some context. In September 1930 Kurt Gödel announced his incompleteness proof. Any logical system complex enough has things which are true and can’t be proven. The system doesn’t have to be that complex. Mathematical rigor must depend on something outside mathematics. This shook von Neumann. He would say that, after Gödel published, he never bothered reading another paper on symbolic logic. Mirowski believes this drove von Neumann into what we now call artificial intelligence. At least, into mathematics that draws from empirical phenomena. von Neumann needed time to recover from the shock. And needed the prodding of Morgenstern to return to economics.

After publication, Theory Of Games And Economic Behavior was … well, Mirowski calls it more “cited in reverence than actually read”. But game theory, as a concept? That took off. It seemed to offer a way to rationalize the world.

von Neumann would become a powerful public intellectual. He would join the Manhattan Project. He showed that the atomic bomb would be more destructive if it exploded kilometers above the ground, rather than at ground level. He was on the target selection committee which, ultimately, slated Hiroshima and Nagasaki for mass murder. He would become a consultant for the Weapons System Evaluation Group. They advised the United States Joint Chiefs of Staff on developing and using new war technology. He described himself, to a Senate committee, as “violently anti-communist and much more militaristic than the norm”. He is quoted in 1950 as remarking, “if you say why not bomb [ the Soviets ] tomorrow, I say, why not today? If you say today at five o’clock, I say why not one o’clock?”

The quote sounds horrifying. It makes game-theory sense, though. If war is inevitable, it is better fought when your opponent is weaker. And while the Soviet Union had won World War II, it was also ruined in the effort.

There is another game-theory-inspired horror for which we credit von Neumann. This is Mutual Assured Destruction. If any use of an atomic, or nuclear, weapon would destroy the instigator in retaliation, then no one would instigate war. So the nuclear powers need, not just nuclear arsenals. They need such vast arsenals that the remnant which survives the first strike can destroy the other powers in the second strike.

Perhaps the reasoning holds together. We did reach the dissolution of the Soviet Union without another atomic weapon being used in anger. But it is hard to say that was rationally accomplished. There were at least two points, in 1962 and in 1983, when a world-ruining war could too easily have happened, by people following the “obvious” strategy.

Which brings a flaw of game theory, at least as applied to something as complicated as grand strategy. Game theory demands the rules be known, and agreed on. (At least that there is a way of settling rule disputes.) It demands we have the relevant information known truthfully. It demands we know what our actual goals are. It demands that we act rationally, and that our opponent acts rationally. It demands that we agree on what rational is. (Think of, in Doctor Strangelove, the Soviet choice to delay announcing its doomsday machine’s completion.) Few of these conditions obtain in grand strategy. They barely obtain in grand strategy games. von Neumann was aware of at least some of these limitations, though he did not live long enough to address them. He died of either bone, pancreatic, or prostate cancer, likely caused by radiation exposure working at Los Alamos.

Game theory has been, and is, a great tool in many fields. It gives us insight into human interactions. It does good work in economics, in biology, in computer science, in management. But we can come to very bad ends when we forget the difference between the game we play and the game we modelled. And if we forget that the game is value-indifferent. The theory makes no judgements about the ethical nature of the goal. It can’t, any more than the quadratic equation can tell us whether ‘x’ is the fielder who will catch the fly ball or the person who will be killed by a cannonball.

It makes an interesting parallel to the 19th century’s greatest fusion of mathematics and economics. This was utilitarianism, one famous attempt to bring scientific inquiry to the study of how society should be set up. Utilitarianism offers exciting insights into, say, how to allocate public services. But it struggles to explain why we should refrain from murdering someone whose death would be convenient. We need a reason besides the maximizing of utility.

No war is inevitable. One comes about only after many choices. Some are grand choices, such as a head of government issuing an ultimatum. Some are petty choices, such as the many people who enlist as the sergeants that make an army exist. We like to think we choose rationally. Psychological experiments, and experience, and introspection tell us we more often choose and then rationalize.

von Neumann was a young man, not yet in college, during the short life of the Hungarian Soviet Republic, and the White Terror that followed. I do not know his biography well enough to say how that experience motivated his life’s reasoning. I would not want to say that 1919 explained it all. The logic of a life is messier than that. I bring it up in part to fight the tendency of online biographic sketches to write as though he popped into existence, calculated a while, inspired a few jokes, and vanished. And to reiterate that even mathematics never exists without context. Even what seem to be pure questions about an abstract idea of a game are often inspired by a practical question. And that work is always done in a context that affects how we evaluate it.


Thank you all for reading. This grew a bit more serious than I had anticipated. This and all the other 2020 A-to-Z essays should appear at this link. Both the 2020 and all past A-to-Z essays should be at this link.

I am hosting the Playful Math Education Blog Carnival at the end of September, so I would appreciate any educational or recreational or fun mathematics material you know about. I’m hoping to publish next week and so hope that you can help me this week.

And, finally, I am open for mathematics topics starting with P, Q, and R to write about next month. I should be writing about them this month and getting ahead of deadline, but that seems not to be happening.

Checking Back in On That 117-Year-Old Roller Coaster


I apologize to people who want to know the most they can about the comic strips of the past week. I’ve not had time to write about them. Part of what has kept me busy is a visit to Lakemont Park, in Altoona, Pennsylvania. The park has had several bad years, including two years in which it did not open at all. But still standing at the park is the oldest surviving roller coaster, Leap The Dips.

My first visit to this park, in 2013, among other things gave me a mathematical question to ask. That is, could any of the many pieces of wood in it be original? How many pieces would you expect to be original?

Two parts of the white-painted-wood roller coaster track. In front is the diagonal lift hill. Behind is a basically horizontal track which has a small dip in the middle.
One of the dips of Leap The Dips. These hills are not large ones. The biggest drop is about nine feet; the coaster is a total of 41 feet high at its greatest. The track goes back and forth in a figure-eight layout several times, and in the middle of each ‘straightaway’ leg is a dip like this.

Problems of this form happen all the time. They turn up whenever there’s something which has a small chance of happening, but many chances to happen. In this case, there’s a small chance that any particular piece of wood will need replacing. But there are a lot of pieces of wood, and they might need replacement at any ride inspection. So there’s an obvious answer to how likely it is any piece of wood would survive a century-plus. And, from that, how much of that wood should be original.
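To make that concrete, here is the shape of the calculation in Python. The replacement probability, the inspection schedule, and the board count are all numbers I am inventing for the sake of illustration, not measurements of Leap The Dips.

```python
# Expected fraction of original boards surviving many independent inspections.
# Every number below is invented for illustration.
p_replace = 0.02        # chance a given board gets replaced at any one inspection
inspections = 2 * 117   # say, twice-yearly inspections over 117 years

survival_chance = (1 - p_replace) ** inspections
print(f"Chance a given board is original: {survival_chance:.6f}")

boards = 10_000         # a rough guess at how many boards are in the structure
print(f"Expected original boards: {boards * survival_chance:.1f}")
```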

And, since this is a probability question, I found reasons not to believe in this answer. These reasons amount to my doubting that the reality is much like the mathematical abstraction. I even found evidence that my doubts were correct.

Covered station for the roller coaster, with 'LEAP THE DIPS' written in what looks like a hand-painted sign hanging from above. Two roller coaster chairs sit by the station.
The station for the Leap The Dips roller coaster, Lakemont Park, Altoona, Pennsylvania. There are two separate cars visible on the tracks by the station. When I last visited there was only one car on the tracks. The cars have a front and a back seat, and while there is a bar to grab hold of, there are no other restraints, which makes the low-speed ride more exciting.

The sad thing to say about revisiting Lakemont Park — well, one is that the park has lost almost all its amusement park rides. It’s got athletic facilities, and a couple miniature golf courses, but besides two wooden and one kiddie roller coaster, and an antique-cars ride, there’s not much left of its long history as an amusement park. But the other thing is that Leap The Dips was closed when I was able to visit. The ride’s under repairs, and seems to be getting painted too. This is sad, but I hope it implies better things soon.

The Summer 2017 Mathematics A To Z: Arithmetic


And now as summer (United States edition) reaches its closing months I plunge into the fourth of my A To Z mathematics-glossary sequences. I hope I know what I’m doing! Today’s request is one of several from Gaurish, who’s got to be my top requester for mathematical terms and whom I thank for it. It’s a lot easier writing these things when I don’t have to think up topics. Gaurish hosts a fine blog, For the love of Mathematics, which you might consider reading.

Summer 2017 Mathematics A to Z, featuring a coati (it's kind of the Latin American raccoon) looking over alphabet blocks, with a lot of equations in the background.
Art courtesy of Thomas K Dye, creator of the web comic Newshounds. He has a Patreon for those able to support his work. He’s also open for commissions, starting from US$10.

Arithmetic.

Arithmetic is what people who aren’t mathematicians figure mathematicians do all day. I remember in my childhood a Berenstain Bears book about people’s jobs. Its mathematician was an adorable little bear adding up sums on the chalkboard, in an observatory, on the Moon. I liked every part of this. I wouldn’t say it’s the whole reason I became a mathematician but it did make the prospect look good early on.

People who aren’t mathematicians are right. At least, the bulk of what mathematics people do is arithmetic. If we work by volume. Arithmetic is about the calculations we do to evaluate or solve polynomials. And polynomials are everything that humans find interesting. Arithmetic is adding and subtracting, multiplying and dividing, taking powers and taking roots. Arithmetic is changing the units of a thing, and breaking something into several smaller units, or merging several smaller units into one big one. Arithmetic’s role in commerce and in finance must overwhelm the higher mathematics. Higher mathematics offers cohomologies and Ricci tensors. Arithmetic offers a budget.

This is old mathematics. There’s evidence of humans twenty thousand years ago recording their arithmetic computations. My understanding is the evidence is ambiguous and interpretations vary. This seems fair. I assume that humans did such arithmetic then, granting that I do not know how to interpret archeological evidence. The thing is that arithmetic is older than humans. Animals are able to count, to do addition and subtraction, perhaps to do harder computations. (I crib this from The Number Sense: How the Mind Creates Mathematics, by Stanislas Dehaene.) We learn it first, refining our rough, instinctively developed sense to something rigorous. At least we learn it at the same time we learn geometry, the other branch of mathematics that must predate human existence.

The primality of arithmetic governs how it becomes an adjective. We will have, for example, the “arithmetic progression” of terms in a sequence. This is a sequence of numbers such as 1, 3, 5, 7, 9, and so on. Or 4, 9, 14, 19, 24, 29, and so on. The difference between one term and its successor is the same as the difference between the predecessor and this term. Or we speak of the “arithmetic mean”. This is the one found by adding together all the numbers of a sample and dividing by the number of terms in the sample. These are important concepts, useful concepts. They are among the first concepts we have when we think of a thing. Their familiarity makes them easy tools to overlook.
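Both ideas fit in a couple of lines of Python, if you would rather see them as computations than as definitions; the numbers here are arbitrary choices of mine.

```python
# An arithmetic progression: a constant difference between successive terms.
progression = [4 + 5 * k for k in range(7)]   # 4, 9, 14, 19, 24, 29, 34
assert all(b - a == 5 for a, b in zip(progression, progression[1:]))

# The arithmetic mean: add everything up, divide by how many there are.
sample = [2, 7, 11, 20]
mean = sum(sample) / len(sample)
print(progression, mean)   # the mean of this sample is 10.0
```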

Consider the Fundamental Theorem of Arithmetic. There are many Fundamental Theorems; that of Algebra guarantees us the number of roots of a polynomial equation. That of Calculus guarantees us that derivatives and integrals are joined concepts. The Fundamental Theorem of Arithmetic tells us that every whole number greater than one is equal to one and only one product of prime numbers. If a number is equal to (say) two times two times thirteen times nineteen, it cannot also be equal to (say) five times eleven times seventeen. This may seem uncontroversial. The budding mathematician will convince herself it’s so by trying to work out all the ways to write 60 as the product of prime numbers. It’s hard to imagine mathematics for which it isn’t true.
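Here is that exercise done by machine, a trial-division sketch of mine in Python. It proves nothing, but however you hunt for factors, 60 comes out the same way.

```python
# Trial-division factorization, enough to check small cases of the
# Fundamental Theorem of Arithmetic.
def prime_factors(n):
    """Return the prime factorization of n as a list, smallest factors first."""
    factors = []
    d = 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

print(prime_factors(60))   # [2, 2, 3, 5] — the only way to write 60 as primes
print(prime_factors(988))  # [2, 2, 13, 19], the example from the text
```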

But it needn’t be true. As we study why arithmetic works we discover many strange things. This mathematics that we know even without learning is sophisticated. To build a logical justification for it requires a theory of sets and hundreds of pages of tight reasoning. Or a theory of categories and I don’t even know how much reasoning. The thing that is obvious from putting a couple objects on a table and then a couple more is hard to prove.

As we continue studying arithmetic we start to ponder things like Goldbach’s Conjecture, about even numbers (other than two) being the sum of exactly two prime numbers. This brings us into number theory, a land of fascinating problems. Many of them are so accessible you could pose them to a person while waiting in a fast-food line. This befits a field that grows out of such simple stuff. Many of those are so hard to answer that no person knows whether they are true, or are false, or are even answerable.
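Goldbach’s Conjecture is the sort of thing you can poke at with a few lines of code, checking small cases — which, of course, settles nothing about all of them. A sketch in Python:

```python
# Check Goldbach's Conjecture for small even numbers: every even number
# greater than two should be a sum of two primes. Finitely many checks
# prove nothing in general, of course.
def is_prime(n):
    return n >= 2 and all(n % d for d in range(2, int(n ** 0.5) + 1))

for even in range(4, 101, 2):
    p, q = next((p, even - p) for p in range(2, even)
                if is_prime(p) and is_prime(even - p))
    print(even, "=", p, "+", q)
```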

And it splits off other ideas. Arithmetic starts, at least, with the counting numbers. It moves into the whole numbers and soon all the integers. With division we soon get rational numbers. With roots we soon get certain irrational numbers. A close study of this implies there are irrational numbers that must exist, at least as much as “four” exists, yet that can’t be reached by studying polynomials. Not polynomials that don’t already use these exotic irrational numbers, anyway. These are transcendental numbers. If we were to say the transcendental numbers were the only real numbers we would be making only a very slight mistake. We learn they exist by thinking long enough and deep enough about arithmetic to realize there must be more there than we realized.

Thought compounds thought. The integers and the rational numbers and the real numbers have a structure. They interact in certain ways. We can look for things that are not numbers, but which follow rules like that for addition and for multiplication. Sometimes even for powers and for roots. Some of these can be strange: polynomials themselves, for example, follow rules like those of arithmetic. Matrices, which we can represent as grids of numbers, can have powers and even something like roots. Arithmetic is inspiration to finding mathematical structures that look little like our arithmetic. We can find things that follow mathematical operations but which don’t have a Fundamental Theorem of Arithmetic.

And there are more related ideas. These are often very useful. There’s modular arithmetic, in which we adjust the rules of addition and multiplication so that we can work with a finite set of numbers. There’s floating point arithmetic, in which we set machines to do our calculations. These calculations are no longer precise. But they are fast, and reliable, and that is often what we need.
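Both of those ideas are easy to see in action. A tiny Python sketch, where the modulus and the decimals are arbitrary choices of mine:

```python
# Modular arithmetic: work with the finite set {0, 1, ..., 11}, wrapping around.
print((7 + 8) % 12)   # 3, like hours on a clock face
print((5 * 9) % 12)   # 9

# Floating point arithmetic: fast and reliable, but not exact.
print(0.1 + 0.2)          # 0.30000000000000004
print(0.1 + 0.2 == 0.3)   # False
```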

So arithmetic is what people who aren’t mathematicians figure mathematicians do all day. And they are mistaken, but not by much. Arithmetic gives us an idea of what mathematics we can hope to understand. So it structures the way we think about mathematics.

48 Altered States


I saw this intriguing map produced by Brian Brettschneider.

He made it on and for Twitter, as best I can determine. I found it from a stray post in Usenet newsgroup soc.history.what-if, dedicated to ways history could have gone otherwise. It also covers ways that it could not possibly have gone otherwise but would be interesting to see happen. Very different United States state boundaries are part of the latter set of things.

The location of these boundaries is described in English and so comes out a little confusing. It’s hard to make concise. Every point in, say, this alternate Missouri is closer to Missouri’s capital of Jefferson City than it is to any other state’s capital. And the same for all the other states. All you kind readers who made it through my recent A To Z know a technical term for this. This is a Voronoi Diagram. It uses as its basis points the capitals of the (contiguous) United States.
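For the curious, the computation itself is routine. Here is a sketch using scipy’s Voronoi tools on a handful of capitals. The coordinates are rough longitude-and-latitude values I am supplying for illustration, not whatever data Brettschneider actually used, and a proper version would worry about the map projection.

```python
# A sketch of the idea behind the map: a Voronoi diagram whose seed points
# are state capitals. Coordinates are rough (longitude, latitude) pairs,
# supplied here only for illustration.
import numpy as np
import matplotlib.pyplot as plt
from scipy.spatial import Voronoi, voronoi_plot_2d

capitals = np.array([
    [-69.78, 44.31],   # Augusta, ME
    [-71.54, 43.21],   # Concord, NH
    [-72.58, 44.26],   # Montpelier, VT
    [-71.06, 42.36],   # Boston, MA
    [-71.41, 41.82],   # Providence, RI
    [-72.68, 41.76],   # Hartford, CT
    [-73.76, 42.65],   # Albany, NY
    [-74.76, 40.22],   # Trenton, NJ
])

vor = Voronoi(capitals)   # each cell is the region closest to one capital
voronoi_plot_2d(vor)
plt.show()
```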

It’s an amusing map. I mean amusing to people who can attach concepts like amusement to maps. It’d probably be a good one to use if someone needed to make a Risk-style grand strategy game map and didn’t want to be too beholden to the actual map.

No state comes out unchanged, although a few don’t come out too bad. Maine is nearly unchanged. Michigan isn’t changed beyond recognition. Florida gets a little weirder but if you showed someone this alternate shape they’d recognize the original. No such luck with alternate Tennessee or alternate Wyoming.

The connectivity between states changes a little. California and Arizona lose their border. Washington and Montana gain one; similarly, Vermont and Maine suddenly become neighbors. The “Four Corners” spot where Utah, Colorado, New Mexico, and Arizona converge is gone. Two new ones look like they appear, between New Hampshire, Massachusetts, Rhode Island, and Connecticut; and between Pennsylvania, Maryland, Virginia, and West Virginia. I would be stunned if that weren’t just because we can’t zoom far enough in on the map to see they’re actually a pair of nearby three-way junctions.

I’m impressed by the number of borders that are nearly intact, like those of Missouri or Washington. After all, many actual state boundaries are geographic features like rivers that a Voronoi Diagram doesn’t notice. How could Ohio come out looking anything like Ohio?

The reason comes down to historical subtleties. At least once you get past the original 13 states, basically the east coast of the United States. The boundaries of those states were set by colonial charters, drawn with little or ambiguous information about what the local terrain was actually like, and drawn to reward or punish court factions and favorites. Never mind the original thirteen (plus Maine and Vermont, which we might as well consider part of the original thirteen).

After that, though, the United States started drawing state boundaries and had some method to it all. Generally a chunk of territory would be split into territories and later states that would be roughly rectangular, so far as practical, and roughly similar in size to the other states carved of the same area. So for example Missouri and Alabama are roughly similar to Georgia in size and even shape. Louisiana, Arkansas, and Missouri are about equal in north-south span and loosely similar east-to-west. Kansas, Nebraska, South Dakota, and North Dakota aren’t too different in their north-to-south or east-to-west spans.

There’s exceptions, for reasons tied to the complexities of history. California and Texas get peculiar shapes because they could. Michigan has an upper peninsula for quirky reasons that some friend of mine on Twitter discovers every three weeks or so. But the rough guide is that states look a lot more similar to one another than you’d think from a quick look. Mark Stein’s How The States Got Their Shapes is an endlessly fascinating text explaining this all.

If there is a loose logic to state boundaries, though, what about state capitals? Those are more quirky. One starts to see the patterns when considering questions like “why put California’s capital in Sacramento instead of, like, San Francisco?” or “why Jefferson City instead of Saint Louis or Kansas City?” There is no universal guide, but there are some trends. Generally states end up putting their capitals in a city that’s relatively central, at least to the major population centers around the time of statehood. And, generally, not in one of the state’s big commercial or industrial centers. The desire to be geographically central is easy to understand. No fair making citizens trudge that far if they have business in the capital. Avoiding the (pardon) first tier of cities has subtler politics to it; it’s an attempt to get the government somewhere at least a little inconvenient to the money powers.

There’s exceptions, of course. Boston is the obviously important city in Massachusetts, Salt Lake City the place of interest for Utah, Denver the equivalent for Colorado. Capitals relocated; Atlanta is, I think, Georgia’s eighth(?) since statehood. Sometimes they were weirder. Until 1854 Rhode Island rotated between five cities, to the surprise of people trying to name a third city in Rhode Island. New Jersey settled on Trenton as a compromise between the East and West Jersey capitals of Perth Amboy and Burlington. But if you look for a city that’s fairly central but not the biggest in the state you get to the capital pretty often.

So these are historical and cultural factors which combine to make a Voronoi Diagram map of the United States strange, but not impossibly strange, compared to what has really happened. Things are rarely so arbitrary as they seem at first.

The End 2016 Mathematics A To Z: Yang Hui’s Triangle


Today’s is another request from gaurish and another I’m glad to have as it let me learn things too. That’s a particularly fun kind of essay to have here.

Yang Hui’s Triangle.

It’s a triangle. Not because we’re interested in triangles, but because it’s a particularly good way to organize what we’re doing and show why we do that. We’re making an arrangement of numbers. First we need cells to put the numbers in.

Start with a single cell in what’ll be the top middle of the triangle. It spreads out in rows beneath that. The rows are staggered. The second row has two cells, each one-half width to the side of the starting one. The third row has three cells, each one-half width to the sides of the row above, so that its center cell is directly under the original one. The fourth row has four cells, two of which are exactly underneath the cells of the second row. The fifth row has five cells, three of them directly underneath the third row’s cells. And so on. You know the pattern. It’s the one that the pins in a plinko board take. Just trimmed down to a triangle. Make as many rows as you find interesting. You can always add more later.

In the top cell goes the number ‘1’. There’s also a ‘1’ in the leftmost cell of each row, and a ‘1’ in the rightmost cell of each row.

What of interior cells? The number for those we work out by looking to the row above. Take the cells to the immediate left and right of it. Add the values of those together. So for example the center cell in the third row will be ‘1’ plus ‘1’, commonly regarded as ‘2’. In the fourth row the leftmost cell is ‘1’; it always is. The next cell over will be ‘1’ plus ‘2’, from the row above. That’s ‘3’. The cell next to that will be ‘2’ plus ‘1’, a subtly different ‘3’. And the last cell in the row is ‘1’ because it always is. In the fifth row we get, starting from the left, ‘1’, ‘4’, ‘6’, ‘4’, and ‘1’. And so on.
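Those rules translate into code almost word for word. Here is a sketch of mine in Python, building however many rows you like; the rows it prints are exactly the binomial coefficients discussed next.

```python
# Build rows of Yang Hui's Triangle by the rule described above:
# a 1 on each end, and each interior cell is the sum of the two cells above it.
def yang_hui_rows(count):
    rows = [[1]]
    for _ in range(count - 1):
        previous = rows[-1]
        row = [1]
        for left, right in zip(previous, previous[1:]):
            row.append(left + right)
        row.append(1)
        rows.append(row)
    return rows

for row in yang_hui_rows(6):
    print(row)
# [1]
# [1, 1]
# [1, 2, 1]
# [1, 3, 3, 1]
# [1, 4, 6, 4, 1]
# [1, 5, 10, 10, 5, 1]
```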

It’s a neat little arithmetic project. It has useful application beyond the joy of making something neat. Many neat little arithmetic projects don’t have that. But the numbers in each row give us binomial coefficients, which we often want to know. That is, if we wanted to work out (a + b) to, say, the fourth power, we would know what it looks like from looking at the fifth row of Yang Hui’s Triangle. It will be $1\cdot a^4 + 4\cdot a^3 \cdot b^1 + 6\cdot a^2\cdot b^2 + 4\cdot a^1\cdot b^3 + 1\cdot b^4$. This turns up in polynomials all the time.

Look at diagonals. By diagonal here I mean a line parallel to the line of ‘1’s. Left side or right side; it doesn’t matter. Yang Hui’s triangle is bilaterally symmetric around its center. The first diagonal under the edges is a bit boring but familiar enough: 1-2-3-4-5-6-7-et cetera. The second diagonal is more curious: 1-3-6-10-15-21-28 and so on. You’ve seen those numbers before. They’re called the triangular numbers. They’re the number of dots you need to make a uniformly spaced, staggered-row triangle. Doodle a bit and you’ll see. Or play with coins or pool balls.

The third diagonal looks more arbitrary yet: 1-4-10-20-35-56-84 and on. But these are something too. They’re the tetrahedral numbers. They’re the number of things you need to make a tetrahedron. Try it out with a couple of balls. Oranges if you’re bored at the grocer’s. Four, ten, twenty, these make a nice stack. The fourth diagonal is a bunch of numbers I never paid attention to before. 1-5-15-35-70-126-210 and so on. This is — well. We just did tetrahedrons, the triangular arrangement of three-dimensional balls. Before that we did triangles, the triangular arrangement of two-dimensional discs. Do you want to put in a guess what these “pentatope numbers” are about? Sure, but you hardly need to. If we’ve got a bunch of four-dimensional hyperspheres and want to stack them in a neat triangular pile we need one, or five, or fifteen, or so on to make the pile come out neat. You can guess what might be in the fifth diagonal. I don’t want to think too hard about making triangular heaps of five-dimensional hyperspheres.

There’s more stuff lurking in here, waiting to be decoded. Add the numbers of, say, row four up and you get two raised to the third power. Add the numbers of row ten up and you get two raised to the ninth power. You see the pattern. Add everything in, say, the top five rows together and you get the fifth Mersenne number, two raised to the fifth power (32) minus one (31, when we’re done). Add everything in the top ten rows together and you get the tenth Mersenne number, two raised to the tenth power (1024) minus one (1023).

Or add together things on “shallow diagonals”. Start from a ‘1’ on the outer edge. I’m going to suppose you started on the left edge, but remember symmetry; it’ll be fine if you go from the right instead. Add to that ‘1’ the number you get by moving one cell to the right and going up-and-right. And then again, go one cell to the right and then one cell up-and-right. And again and again, until you run out of cells. You get the Fibonacci sequence, 1-1-2-3-5-8-13-21-and so on.
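If you would rather let a computer do the doodling, the shallow-diagonal sums can be checked straight from binomial coefficients. A quick sketch, mine rather than anything in the historical record:

```python
# Sum along a "shallow diagonal": C(n, 0) + C(n-1, 1) + C(n-2, 2) + ...
# These sums reproduce the Fibonacci sequence.
from math import comb

def shallow_diagonal_sum(n):
    return sum(comb(n - k, k) for k in range(n // 2 + 1))

print([shallow_diagonal_sum(n) for n in range(10)])
# [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```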

We can even make an astounding picture from this. Take the cells of Yang Hui’s triangle. Color them in. One shade if the cell has an odd number, another if the cell has an even number. It will create a pattern we know as the Sierpiński Triangle. (Wacław Sierpiński is proving to be the surprise special guest star in many of this A To Z sequence’s essays.) That’s the fractal of a triangle subdivided into four triangles with the center one knocked out, and the remaining triangles themselves subdivided into four triangles with the center knocked out, and on and on.
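You can watch that picture form even in a plain text terminal. A small self-contained sketch of mine, marking the odd cells and leaving the even ones blank:

```python
# Color Yang Hui's Triangle by parity: '#' for odd entries, blank for even.
# The pattern that emerges is the Sierpinski Triangle.
rows = [[1]]
for _ in range(31):
    prev = rows[-1]
    rows.append([1] + [a + b for a, b in zip(prev, prev[1:])] + [1])

for row in rows:
    line = "".join("#" if value % 2 else " " for value in row)
    print(line.center(2 * len(rows)))
```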

By now I imagine even my most skeptical readers agree this is an interesting, useful mathematical construct. Also that they’re wondering why I haven’t said the name “Blaise Pascal”. The Western mathematical tradition knows of this from Pascal’s work, particularly his 1653 Traité du triangle arithmétique. But mathematicians like to say their work is universal, and independent of the mere human beings who find it. Constructions like this triangle give support to this. Yang lived in China, in the 13th century. I imagine it possible Pascal had heard of his work or been influenced by it, by some chain, but I know of no evidence that he did.

And even if he had, there are other apparently independent inventions. The Avanti Indian astronomer-mathematician-astrologer Varāhamihira described the addition rule which makes the triangle work in commentaries written around the year 500. Omar Khayyám, who keeps appearing in the history of science and mathematics, wrote about the triangle in his 1070 Treatise on Demonstration of Problems of Algebra. Again so far as I am aware there’s not a direct link between any of these discoveries. They are things different people in different traditions found because the tools — arithmetic and aesthetically-pleasing orders of things — were ready for them.

Yang Hui wrote about his triangle in the 1261 book Xiangjie Jiuzhang Suanfa. In it he credits the use of the triangle (for finding roots) as invented around 1100 by mathematician Jia Xian. This reminds us that it is not merely mathematical discoveries that are found by many peoples at many times and places. So is Boyer’s Law, discovered by Hubert Kennedy.

Theorem Thursday: The Five-Color Map Theorem


People think mathematics is mostly counting and arithmetic. It’s what we get at when we say “do the math[s]”. It’s why the mathematician in the group is the one called on to work out what the tip should be. Heck, I attribute part of my love for mathematics to a Berenstain Bears book which implied being a mathematician was mostly about adding up sums in a base on the Moon, which is an irresistible prospect. In fact, usually counting and arithmetic are, at least, minor influences on real mathematics. There are legends of how catastrophically bad at figuring mathematical genius can be. But usually isn’t always, and this week I’d like to show off a case where counting things and adding things up lets us prove something interesting.

The Five-Color Map Theorem.

No, not four. I imagine anyone interested enough to read a mathematics blog knows the four-color map theorem. It says that you only need four colors to color a map. That’s true, given some qualifiers. No discontiguous chunks that need the same color. Two regions with the same color can touch at a point, they just can’t share a line or curve. The map is on a plane or the surface of a sphere. Probably some other requirements. I’m not going to prove that. Nobody has time for that. The best proofs we’ve figured out for it amount to working out how every map fits into one of a huge number of cases, and trying out each case. It’s possible to color each of those cases with only four colors, so, we’re done. Nice but unenlightening and way too long to deal with.

The five-color map theorem is a lot like the four-color map theorem, with this difference: it says that you only need five colors to color a map. Same qualifiers as before. Yes, it’s true because the four-color map theorem is true and because five is more than four. We can do better than that. We can prove five colors are enough even without knowing whether four colors will do. And it’s easy. The ease of the five-color map theorem gave people reason to think four colors would be maybe harder but still manageable.

The proof I want to show uses one of mathematicians’ common tricks. It employs the same principle which Hercules used to slay the Hydra, although it has less cauterizing lake-monster flesh with flaming torches, as that’s considered beneath the dignity of the Academy anymore except when grading finals for general-requirements classes. The part of the idea we do use is to take a problem which we might not be able to do and cut it down to one we can do. Properly speaking this is a kind of induction proof. In those we start from problems we can do and show that if we can do those, we can do all the complicated problems. But we come at it by cutting down complicated problems and making them simple ones.

So suppose we start with a map that’s got some huge number of territories to color. I’m going to start with the United States states which were part of the Dominion of New England. As I’m sure I don’t need to remind any readers, American or otherwise, this was a 17th century attempt by the English to reorganize their many North American colonies into something with fewer administrative irregularities. It lasted almost long enough for the colonists to hear about it. At that point the Glorious Revolution happened (not involving the colonists) and everybody went back to what they were doing before.

Please enjoy my little map of the place. It gives all the states a single color because I don’t really know how to use QGIS and it would probably make my day job easier if I did. (Well, QGIS is open-source software, so its interface is a disaster and its tutorials gibberish. The only way to do something with it is to take flaming torches to it.)

Map showing New York, New Jersey, and New England (Connecticut, Rhode Island, Massachusetts, Vermont, New Hampshire, and Maine) in a vast white space.
States which, in their 17th-century English colonial form, were part of the Dominion of New England (1685-1689). More or less. If I’ve messed up don’t tell me as it doesn’t really matter for this problem.

There’s eight regions here, eight states, so it’s not like we’re at the point we can’t figure how to color this with five different colors. That’s all right. I’m using this for a demonstration. Pretend the Dominion of New England is so complicated we can’t tell whether five colors are enough. Oh, and a spot of lingo: if five colors are enough to color the map we say the map is “colorable”. We say it’s “5-colorable” if we want to emphasize five is enough colors.

So imagine that we erase the border between Maine and New Hampshire. Combine them into a single state over the loud protests of the many proud, scary Mainers. But if this simplified New England is colorable, so is the real thing. There’s at least one color not used for Greater New Hampshire, Vermont, or Massachusetts. We give that color to a restored Maine. If the simplified map can be 5-colored, so can the original.

Maybe we can’t tell. Suppose the simplified map is still too complicated to make it obvious. OK, then. Cut out another border. How about we offend Roger Williams partisans and merge Rhode Island into Massachusetts? Massachusetts started out touching five other states, which makes it a good candidate for a state that needed a sixth color. With Rhode Island reduced to being a couple counties of the Bay State, Greater Massachusetts only touches four other states. It can’t need a sixth color. There’s at least one of our original five that’s free.

OK, but, how does that help us find a color for Rhode Island? For Maine it’s easy to see why there’s a free color. But Rhode Island?

Well, it’ll have to be the same color as either Greater New Hampshire or Vermont or New York. At least one of them has to be available. Rhode Island doesn’t touch them. Connecticut’s color is out because Rhode Island shares a border with it. Same with Greater Massachusetts’s color. But we’ve got three colors for the taking.

But is our reduced map 5-colorable? Even with Maine part of New Hampshire and Rhode Island part of Massachusetts it might still be too hard to tell. There’s six territories in it, after all. We can simplify things a little. Let’s reverse the treason of 1777 and put Vermont back into New York, dismissing New Hampshire’s claim on the territory as obvious absurdity. I am never going to be allowed back into New England. This Greater New York needs one color for itself, yes. And it touches four other states. But these neighboring states don’t touch each other. A restored Vermont could use the same color as New Jersey or Connecticut. Greater Massachusetts and Greater New Hampshire are unavailable, but there’s still two choices left.

And now look at the map we have remaining. There’s five states in it: Greater New Hampshire, Greater Massachusetts, Greater New York, Regular Old Connecticut, and Regular old New Jersey. We have five colors. Obviously we can give the five territories different colors.

This is one case, one example map. That’s all we need. A proper proof makes things more abstract, but uses the same pattern. Any map of a bunch of territories is going to have at least one territory that’s got at most five neighbors. Maybe it will have several. Look for one of them. If you find a territory with just one neighbor, such as Maine had, remove that border. You’ve got a simpler map and there must be a color free for the restored territory.

If you find a territory with just two neighbors, such as Rhode Island, take your pick. Merge it with either neighbor. You’ll still have at least one color free for the restored territory. With three neighbors, such as Vermont or Connecticut, again you have your choice. Merge it with any of the three neighbors. You’ll have a simpler map and there’ll be at least one free color.

If you have four neighbors, the way New York has, again pick a border you like and eliminate that. There is a catch. You can imagine one of the neighboring territories reaching out and wrapping around to touch the original state on more than one side. Imagine if Massachusetts ran far out to sea, looped back through Canada, and came back to touch New Jersey, Vermont from the north, and New York from the west. That’s more of a Connecticut stunt to pull, I admit. But that’s still all right. Most of the colonies tried this sort of stunt. And even if Massachusetts did that, we would have colors available. It would be impossible for Vermont and New Jersey to touch. We’ve got a theorem that proves it.

Yes, it’s the Jordan Curve Theorem, here to save us right when we might get stuck. Just like I promised last week. In this case some part of the border of New York and Really Big Massachusetts serves as our curve. Either Vermont or New Jersey is going to be inside that curve, and the other state is outside. They can’t touch. Thank you.

If you have five neighbors, the way Massachusetts has, well, maybe you’re lucky. We are here. None of its neighboring states touches more than two others. We can cut out a border easily and have colors to spare. But we could be in trouble. We could have a map in which all the bordering states touch three or four neighbors and that seems like it would run out of colors. Let me show a picture of that.

The map shows a pentagonal region A which borders five regions, B, C, D, E, and F. Each of those regions borders three or four others. B is entirely enclosed by regions A, C, and D, although from B's perspective they're all enclosed by it.
A hypothetical map with five regions named by an uninspired committee.

So this map looks dire even when you ignore that line that looks like it isn’t connected where C and D come together. Flood fill didn’t run past it, so it must be connected. It just doesn’t look right. Everybody has four neighbors except the province of B, which has three. The province of A has got five. What can we do?

Call on the Jordan Curve Theorem again. At least one of the provinces has to be landlocked, relative to the others. In this case, the borders of provinces A, D, and C come together to make a curve that keeps B in the inside and E on the outside. So we’re free to give B and E the same color. We treat this in the proof by doing a double merger. Erase the boundary between provinces A and B, and also that between provinces A and E. (Or you might merge B, A, and F together. It doesn’t matter. The Jordan Curve Theorem promises us there’ll be at least one choice and that’s all we need.)

So there we have it. As long as we have a map that has some provinces with up to five neighbors, we can reduce the map. And reduce it again, if need be, and again and again. Eventually we’ll get to a map with only five provinces and that has to be 5-colorable.

Just … now … one little nagging thing. We’re relying on there always being some province with at most five neighbors. Why can’t there be some horrible map where every province has six or more neighbors?

Counting will tell us. Arithmetic will finish the job. But we have to get there by way of polygons.

That is, the easiest way to prove this depends on a map with boundaries that are all polygons. That’s all right. Polygons are almost the polynomials of geometry. You can make a polygon that looks so much like the original shape the eye can’t tell the difference. Look at my Dominion of New England map. That’s computer-rendered, so it’s all polygons, and yet all those shore and river boundaries look natural.

But what makes up a polygon? Well, it’s a bunch of straight lines. We call those ‘edges’. Each edge starts and ends at a corner. We call those ‘vertices’. These edges come around and close together to make a ‘face’, a territory like we’ve been talking about. We’re going to count all the regions that have a certain number of neighboring other regions.

Specifically, F2 will represent however many faces there are that have two sides. F3 will represent however many faces there are that have three sides. F4 will represent however many faces there are that have four sides. F10 … yeah, you got this.

One thing you didn’t get. The outside counts as a face. We need this to make the count come out right, so we can use some solid-geometry results. In my map that’s the vast white space that represents the Atlantic Ocean, the other United States, the other parts of Canada, the Great Lakes, all the rest of the world. So Maine, for example, belongs to F2 because it touches New Hampshire and the great unknown void of the rest of the universe. Rhode Island belongs to F3 similarly. New Hampshire’s in F4.

Any map has to have at least one thing that’s in F2, F3, F4, or F5. They touch at most two, three, four or five neighbors. (If they touched more, they’d represent a face that was a polygon of even more sides.)

How do we know? It comes from Euler’s Formula, which starts out describing the ways corners and edges and faces of a polyhedron fit together. Our map, with its polygons on the surface of the sphere, turns out to be just as good as a polyhedron. It looks a little less blocky, but that doesn’t matter.

By Euler’s Formula, there’s this neat relationship between the number of vertices, the number of edges, and the number of faces in a polyhedron. (This is the same Leonhard Euler famous for … well, everything in mathematics, really. But in this case it’s for his work with shapes.) It holds for our map too. Call the number of vertices V. Call the number of edges E. Call the number of faces F. Then:

V - E + F = 2

Always true. Try drawing some maps yourself, using simple straight lines, and see if it works. For that matter, look at my Really Really Simplified map and see if it doesn’t hold true still.

One of those blocky diagrams of New York, New Jersey, and New England, done in that way transit maps look, only worse because I'm not so good at this.
A very simplified blocky diagram of my Dominion of New England, with the vertices and edges highlighted so they’re easy to count if you want to do that.
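
If you’d rather check the formula with a computer than with a pencil, here’s about the smallest sketch I can write. The numbers in it are my hand-count of a toy map of my own, a rectangle split down the middle into two territories, rather than anything in the diagram above.

```python
# Toy map: a rectangle split down the middle into two territories.
# Hand count: 6 corners, 7 straight borders, and 3 faces
# (the two territories plus the outside).
V, E, F = 6, 7, 3
print(V - E + F)   # 2, as Euler's Formula promises
```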

Here’s one of those insights that’s so obvious it’s hard to believe. Every edge ends in two vertices. Three edges meet at every vertex. (We don’t have more than three territories come together at a point. If that were to happen, we’d change the map a little to find our coloring and then put it back afterwards. Pick one of the territories and give it a little disc of area at the troublesome corner, taking a nibble out of the four or five or more territories that met there. The troublesome corner is gone. Once we’re done with our proof, shrink the disc back down to nothing. Coloring done!) Count the edge-ends two ways: every edge contributes two of them, and every vertex gathers three of them. And therefore 2E = 3V .

A polygon has the same number of edges as vertices, and if you don’t believe that then draw some and count. Every edge touches exactly two regions. Every vertex touches exactly three edges. So we can rework Euler’s formula. Multiply it by six and we get 6V - 6E + 6F = 12 . And doubling the edges-and-vertices equation from the last paragraph gives 4E = 6V . So if we break up that 6E into 4E and 2E we can rewrite Euler’s formula again. It becomes 6V - 4E - 2E + 6F = 12 . 6V - 4E is zero, so, -2E + 6F = 12 .

Do we know anything about F itself?

Well, yeah. F = F_2 + F_3 + F_4 + F_5 + F_6 + \cdots . The number of faces has to equal the sum of the number of faces of two edges, and of three edges, and of four edges, and of five edges, and of six edges, and on and on. Counting!

Do we know anything about how E and F relate?

Well, yeah. A polygon in F2 has two edges. A polygon in F3 has three edges. A polygon in F4 has four edges. And each edge runs up against two faces. So therefore 2E = 2F_2 + 3F_3 + 4F_4 + 5F_5 + 6F_6 + \cdots . This goes on forever but that’s all right. We don’t need all these terms.

Because here’s what we do have. We know that -2E + 6F = 12 . And we know how to write both E and F in terms of F2, F3, F4, and so on. We’re going to show at least one of these low-subscript Fsomethings has to be positive, that is, there has to be at least one of them.

Start by just shoving our long sum expressions into the modified Euler’s Formula we had. That gives us this:

-(2F_2 + 3F_3 + 4F_4 + 5F_5 + 6F_6 + \cdots) + 6(F_2 + F_3 + F_4 + F_5 + F_6 + \cdots) = 12

Doesn’t look like we’ve got anywhere, does it? That’s all right. Multiply that -1 and that 6 into their parentheses. And then move the terms around, so that we group all the terms with F2 together, and all the terms with F3 together, and all the terms with F4 together, and so on. This gets us to:

(-2 + 6) F_2 + (-3 + 6) F_3 + (-4 + 6) F_4 + (-5 + 6) F_5  + (-6 + 6) F_6 + (-7 + 6) F_7 + (-8 + 6) F_8 + \cdots = 12

I know, that’s a lot of parentheses. And it adds negative numbers to positive which I guess we’re allowed to do but who wants to do that? Simplify things a little more:

4 F_2 + 3 F_3 + 2 F_4 + 1 F_5 + 0 F_6 - 1 F_7 - 2 F_8 - \cdots = 12

And now look at that. Each Fsubscript has to be zero or a positive number. You can’t have a negative number of shapes. If you can I don’t want to hear about it. Most of those Fsubscript‘s get multiplied by a negative number before they’re added up. But the sum has to be a positive number.

There’s only one way that this sum can be a positive number. At least one of F2, F3, F4, or F5 has to be a positive number. So there must be at least one region with at most five neighbors. And that’s true without knowing anything about our map. So it’s true about the original map, and it’s true about a simplified map, and about a simplified-more map, and on and on.
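
If you want a numerical spot-check of that last equation, here’s one, using a map I picked because it’s easy to count: a small square territory drawn inside a big one, joined to it by four short borders, the way a cube looks flattened onto paper. Every corner has three edges, and every one of its six faces, the outside included, has four sides.

```python
# Face counts for the flattened-cube map: six four-sided faces, nothing else.
F = {2: 0, 3: 0, 4: 6, 5: 0, 6: 0}
# 4*F2 + 3*F3 + 2*F4 + 1*F5 + 0*F6 - ... is the same as summing (6 - k)*F[k].
print(sum((6 - k) * count for k, count in F.items()))   # 12, as promised
```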

And that is why this hydra-style attack method always works. We can always simplify a map until it obviously can be colored with five colors. And we can go from that simplified map back to the original map, and color it in just fine. Formally, this is an existence proof: it shows there must be a way to color a map with five colors. But it does so the devious way, by showing a way to color the map. We don’t get enough existence proofs like that. And, at its critical point, we know the proof is true because we can count the number of regions and the number of edges and the number of corners they have. And we can add and subtract those numbers in the right way. Just like people imagine mathematicians do all day.
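
If you’d like to watch the hydra attack happen, here’s a rough sketch of it in Python. The adjacency list is my own encoding of the Dominion of New England map described above, and the bookkeeping choices, like which regions to merge, are mine too; take it as an illustration of the proof’s pattern rather than anybody’s polished algorithm.

```python
def five_color(adj):
    """Color a planar map with five colors. adj maps each region to the set of regions it borders."""
    if len(adj) <= 5:
        # Five or fewer regions: hand out distinct colors and be done.
        return {region: color for color, region in enumerate(adj)}
    # The counting argument above guarantees some region has at most five neighbors.
    r = next(region for region in adj if len(adj[region]) <= 5)
    if len(adj[r]) <= 4:
        # Erase r, color what's left, then give r a color its neighbors skipped.
        rest = {v: adj[v] - {r} for v in adj if v != r}
        coloring = five_color(rest)
        used = {coloring[n] for n in adj[r]}
        coloring[r] = next(c for c in range(5) if c not in used)
        return coloring
    # Five neighbors: the Jordan Curve Theorem promises two of them, a and b,
    # that don't touch each other. Merge a, b, and r into a single region called a.
    neighbors = sorted(adj[r])
    a, b = next((x, y) for x in neighbors for y in neighbors
                if x < y and y not in adj[x])
    rest = {}
    for v in adj:
        if v in (r, b):
            continue
        rest[v] = {a if n in (r, b) else n for n in adj[v]} - {v}
    rest[a] = (adj[a] | adj[b] | adj[r]) - {a, b, r}
    coloring = five_color(rest)
    coloring[b] = coloring[a]              # b was wearing a's color all along
    used = {coloring[n] for n in adj[r]}   # a and b share, so at most four colors
    coloring[r] = next(c for c in range(5) if c not in used)
    return coloring


dominion = {
    'Maine':         {'New Hampshire'},
    'New Hampshire': {'Maine', 'Vermont', 'Massachusetts'},
    'Vermont':       {'New Hampshire', 'Massachusetts', 'New York'},
    'Massachusetts': {'New Hampshire', 'Vermont', 'New York', 'Connecticut', 'Rhode Island'},
    'Rhode Island':  {'Massachusetts', 'Connecticut'},
    'Connecticut':   {'Massachusetts', 'Rhode Island', 'New York'},
    'New York':      {'Vermont', 'Massachusetts', 'Connecticut', 'New Jersey'},
    'New Jersey':    {'New York'},
}
print(five_color(dominion))
```

For this particular map the five-neighbor branch never even fires, since Massachusetts loses neighbors before its turn comes up, but the branch is there for maps that need it.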

Properly this works only on the surface of a sphere. Euler’s Formula, which we use for the proof, depends on that. We get away with it on a piece of paper because we can pretend this is just a part of the globe so small we don’t see how flat it is. The vast white edge we suppose wraps around the whole world. And that’s fine since we mostly care about maps on flat surfaces or on globes. If we had a map that needed three dimensions, like one that looked at mining and water and overflight and land-use rights, things wouldn’t be so easy. Nor would they work at all if the map turned out to be on an exotic shape like a torus, a doughnut shape.

But this does have a staggering thought. Suppose we drew boundary lines. And suppose we found an arrangement of them so that we needed more than five colors. This would tell us that we have to be living on a surface such as a torus, the doughnut shape. We could learn something about the way space is curved by way of an experiment that never looks at more than where two regions come together. That we can find information about the whole of space, global information, by looking only at local stuff amazes me. I hope it at least surprises you.

From fiddling with this you probably figure the four-color map theorem should follow right away. Maybe involve a little more arithmetic but nothing too crazy. I agree, it so should. It doesn’t. Sorry.

How Interesting Is A Baseball Score? Some Partial Results


Meanwhile I have the slight ongoing quest to work out the information-theory content of sports scores. For college basketball scores I made up some plausible-looking score distributions and used that. For professional (American) football I found a record of all the score outcomes that’ve happened, and how often. I could use experimental results. And I’ve wanted to do other sports. Soccer was asked for. I haven’t been able to find the scoring data I need for that. Baseball, maybe the supreme example of sports as a way to generate statistics … has been frustrating.

The raw data is available. Retrosheet.org has logs of pretty much every baseball game, going back to the forming of major leagues in the 1870s. What they don’t have, as best I can figure, is a list of all the times each possible baseball score has turned up. That I could probably work out, when I feel up to writing the scripts necessary, but “work”? Ugh.

Some people have done the work, although they haven’t shared all the results. I don’t blame them; the full results make for a boring sort of page. “The Most Popular Scores In Baseball History”, at ValueOverReplacementGrit.com, reports the top ten most common scores from 1871 through 2010. The essay also mentions that as of then there were 611 unique final scores. And that lets me give some partial results, if we trust that blog posts from people I’ve never heard of before are accurate and true. I will make that assumption over and over here.

There’s, in principle, no limit to how many scores are possible. Baseball contains many implied infinities, and it’s not impossible that a game could end, say, 580 to 578. But it seems likely that after 139 seasons of play there can’t be all that many more scores practically achievable.

Suppose then there are 611 possible baseball score outcomes, and that each of them is equally likely. Then the information-theory content of a score’s outcome is negative one times the logarithm, base two, of 1/611. That’s a number a little bit over nine and a quarter. You could deduce the score for a given game by asking usually nine, sometimes ten, yes-or-no questions from a source that knew the outcome. That’s a little higher than the 8.7 I worked out for football. And it’s a bit less than the 10.8 I estimate for college basketball.
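
That “little bit over nine and a quarter” takes one line to check, with the 611 coming from the ValueOverReplacementGrit tally above.

```python
import math

# Information content of one score drawn from 611 equally likely outcomes.
print(-math.log2(1 / 611))   # about 9.255 bits
```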

And there’s obvious rubbish there. In no way are all 611 possible outcomes equally likely. “The Most Popular Scores In Baseball History” says that right there in the essay title. The most common outcome was a score of 3-2, with 4-3 barely less popular. Meanwhile it seems only once, on the 28th of June, 1871, has a baseball game ended with a score of 49-33. Some scores are so rare we can ignore them as possibilities.

(You may wonder how incompetent baseball players of the 1870s were that a game could get to 49-33. Not so bad as you imagine. But the equipment and conditions they were playing with were unspeakably bad by modern standards. Notably, the playing field couldn’t be counted on to be flat and level and well-mowed. There would be unexpected divots or irregularities. This makes even simple ground balls hard to field. The baseball, instead of being replaced with every batter, would stay in the game. It would get beaten until it was a little smashed shell of unpredictable dynamics and barely any structural integrity. People were playing without gloves. If a game ran long enough, they would play at dusk, without lights, with a muddy ball on a dusty field. And sometimes you just have four innings that get out of control.)

What’s needed is a guide to what are the common scores and what are the rare scores. And I haven’t found that, nor worked up the energy to make the list myself. But I found some promising partial results. In a September 2008 post on Baseball-Fever.com, user weskelton listed the 24 most common scores and their frequency. This was for games from 1993 to 2008. One might gripe that the list only covers fifteen years. True enough, but if the years are representative that’s fine. And the top scores for the fifteen-year survey look to be pretty much the same as the 139-year tally. The 24 most common scores add up to just over sixty percent of all baseball games, which leaves a lot of scores unaccounted for. I am amazed that about three in five games will have a score that’s one of these 24 choices though.

But that’s something. We can calculate the information content for the 25 outcomes, one each of the 24 particular scores and one for “other”. This will under-estimate the information content. That’s because “other” is any of 587 possible outcomes that we’re not distinguishing. But if we have a lower bound and an upper bound, then we’ve learned something about what the number we want can actually be. The upper bound is that 9.25, above.

The information content, the entropy, we calculate from the probability of each outcome. We don’t know what that is. Not really. But we can suppose that the frequency of each outcome is close to its probability. If there’ve been a lot of games played, then the frequency of a score and the probability of a score should be close. At least they’ll be close if games are independent, if the score of one game doesn’t affect another’s. I think that’s close to true. (Some games at the end of pennant races might affect each other: why try so hard to score if you’re already out for the year? But there’s few of them.)

The entropy then we find by calculating, for each outcome, a product. It’s minus one times the probability of that outcome times the base-two logarithm of the probability of that outcome. Then add up all those products. There’s good reasons for doing it this way and in the college-basketball link above I give some rough explanations of what the reasons are. Or you can just trust that I’m not lying or getting things wrong on purpose.
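
If the recipe is easier to follow as code, here it is as a small function. I don’t have weskelton’s 24 frequencies in front of me to reproduce here, so the example call just redoes the 611-equally-likely-outcomes case from above; the 3.76-bit figure would come from feeding in the actual frequencies of the 24 scores plus the lumped-together “other”.

```python
import math

def entropy(probabilities):
    """Shannon entropy, in bits, of a list of outcome probabilities."""
    return sum(-p * math.log2(p) for p in probabilities if p > 0)

# With no frequency table handy, fall back on the equally-likely case:
# 611 outcomes, each with probability 1/611, reproducing the 9.25-bit bound.
print(entropy([1 / 611] * 611))
```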

So let’s suppose I have calculated this right, using the 24 distinct outcomes and the one “other” outcome. That makes out the information content of a baseball score’s outcome to be a little over 3.76 bits.

As said, that’s a low estimate. Lumping about two-fifths of all games into the single category “other” drags the entropy down.

But that gives me a range, at least. A baseball game’s score seems to be somewhere between about 3.76 and 9.25 bits of information. I expect that it’s closer to nine bits than it is to four bits, but will have to do a little more work to make the case for it.

A Timeline Of Mathematics Education


https://twitter.com/dannytybrown/status/670174694390239232

As Danny Brown’s tweet above promises, this is an interesting timeline. It’s a “work in progress” presentation by one David Allen that tries to summarize the major changes in the teaching of mathematics in the United States.

It’s a presentation made on Prezi, and it appears to require Flash (and at one point it breaks, at least on my computer, and I have to move around rather than use the forward/backward buttons). And the compilation is cryptic. It reads better as a series of things for further research than anything else. Still, it’s got fascinating data points, such as when algebra became a prerequisite for college, and when it and geometry moved from being college-level mathematics to high school-level mathematics.

Reading the Comics, November 1, 2015: Uncertainty and TV Schedules Edition


Brian Fies’s Mom’s Cancer is a heartbreaking story. It’s compelling reading, but people who are emotionally raw from lost loved ones, or who know they’re particularly sensitive to such stories, should consider before reading that the comic is about exactly what the title says.

But it belongs here because the October 29th and the November 2nd installments are about a curiosity of area, and volume, and hypervolume, and more. That is that our perception of how big a thing is tends to be governed by one dimension, the length or the diameter of the thing. But its area is the square of that, its volume the cube of that, its hypervolume some higher power yet of that. So very slight changes in the diameter produce great changes in the volume. Conversely, though, great changes in volume will look like only slight changes. This can hurt.

Tom Toles’s Randolph Itch, 2 am from the 29th of October is a Roman numerals joke. I include it as comic relief. The clock face in the strip does depict 4 as IV. That’s eccentric but not unknown for clock faces; IIII seems to be more common. There’s not a clear reason why this should be. The explanation I find most nearly convincing is an aesthetic one. Roman numerals are flexible things, and can be arranged for artistic virtue in ways that Arabic numerals make impossible.

The aesthetic argument is that the four-character symbol IIII takes up nearly as much horizontal space as the VIII opposite it. The two-character IV would look distractingly skinny. Now, none of the symbols takes up exactly the same space as their counterpart. X is shorter than II, VII longer than V. But IV-versus-VIII does seem like the biggest discrepancy. Still, Toles’s art shows it wouldn’t look all that weird. And he had to conserve line strokes, so that the clock would read cleanly in newsprint. I imagine he also wanted to avoid using different representations of “4” so close together.

Jon Rosenberg’s Scenes From A Multiverse for the 29th of October is a riff on both quantum mechanics — Schrödinger’s Cat in a box — and the uncertainty principle. The uncertainty principle can be expressed as a fascinating mathematical construct. It starts with Ψ, a probability function that has spacetime as its domain, and the complex-valued numbers as its range. By applying a function to this function we can derive yet another function. This function-of-a-function we call an operator, because we’re saying “function” so much it’s starting to sound funny. But this new function, the one we get by applying an operator to Ψ, tells us the probability that the thing described is in this place versus that place. Or that it has this speed rather than that speed. Or this angular momentum — the tendency to keep spinning — versus that angular momentum. And so on.

If we apply an operator — let me call it A — to the function Ψ, we get a new function. What happens if we apply another operator — let me call it B — to this new function? Well, we get a second new function. It’s much the way if we take a number, and multiply it by another number, and then multiply it again by yet another number. Of course we get a new number out of it. What would you expect? This operators-on-functions things looks and acts in many ways like multiplication. We even use symbols that look like multiplication: AΨ is operator A applied to function Ψ, and BAΨ is operator B applied to the function AΨ.

Now here is the thing we don’t expect. What if we applied operator B to Ψ first, and then operator A to the product? That is, what if we worked out ABΨ? If this was ordinary multiplication, then, nothing all that interesting. Changing the order of the real numbers we multiply together doesn’t change what the product is.

Operators are stranger creatures than real numbers are. It can be that BAΨ is not the same function as ABΨ. We say this means the operators A and B do not commute. But it can be that BAΨ is exactly the same function as ABΨ. When this happens we say that A and B do commute.

Whether they do or they don’t commute depends on the operators. When we know what the operators are we can say whether they commute. We don’t have to try them out on some functions and see what happens, although that sometimes is the easiest way to double-check your work. And here is where we get the uncertainty principle from.

The operator that lets us learn the probability of particles’ positions does not commute with the operator that lets us learn the probability of particles’ momentums. We get different answers if we measure a particle’s position and then its velocity than we do if we measure its velocity and then its position. (Velocity is not the same thing as momentum. But they are related. There’s nothing you can say about momentum in this context that you can’t say about velocity.)
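
If you want to see operators refuse to commute without any physics apparatus, here’s a little numerical sketch. These aren’t the true position and momentum operators, just “multiply by position” and a rough finite-difference “take the derivative”, acting on a made-up wave function, but the order-dependence is the same flavor.

```python
import numpy as np

x = np.linspace(-5, 5, 1001)       # a grid of position values
dx = x[1] - x[0]
psi = np.exp(-x**2)                # a stand-in for the function Psi

def position(f):
    return x * f                   # the "multiply by position" operator

def derivative(f):
    return np.gradient(f, dx)      # a crude "take the derivative" operator

# Apply the two operators in both orders and compare the results.
difference = position(derivative(psi)) - derivative(position(psi))
print(np.max(np.abs(difference)))  # roughly 1, nowhere near zero: they don't commute
```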

The uncertainty principle is a great source for humor, and for science fiction. It seems to allow for all kinds of magic. Its reality is no less amazing, though. For example, it implies that it is impossible for an electron to spiral down into the nucleus of an atom, collapsing atoms the way satellites eventually fall to Earth. Matter can exist, in ways that let us have solid objects and chemistry and biology. This is at least as good as a cat being perhaps boxed.

Jan Eliot’s Stone Soup Classics for the 29th of October is a rerun from 1995. (The strip itself has gone to Sunday-only publication.) It’s a joke about how arithmetic is easy when you have the proper motivation. In 1995 that would include catching TV shows at a particular time. You see, in 1995 it was possible to record and watch TV shows when you wanted, but it required coordinating multiple pieces of electronics. It would often be easier to just watch when the show actually aired. Today we have it much better. You can watch anything you want anytime you want, using any piece of consumer electronics you have within reach, including several current models of microwave ovens and programmable thermostats. This does, sadly, remove one motivation for doing arithmetic. Also, I’m not certain the kids’ TV schedule is actually consistent with what was on TV in 1995.

Oh, heck, why not. Obviously we’re 14 minutes before the hour. It’s 744 minutes to the morning cartoons; that’s 12 hours and 24 minutes. Taking the morning cartoons to start at 8 am, that would make it currently 7:36 pm, which is 24 minutes before the hour rather than 14. I suspect a rounding error. Let me say it’s coming up on 8 pm and call the time 7:46. Then 194 minutes to Jeopardy implies the game show is on at 11 pm. 254 minutes to The Simpsons puts that on at midnight, which is probably true today, though I don’t think it was so in 1995 just yet. 284 minutes to Grace puts that on at 12:30 am.
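
For the record, here’s that little sum done with a calendar library, taking the current time to be 7:46 pm, which is my reading of the strip rather than anything it states outright.

```python
from datetime import datetime, timedelta

now = datetime(1995, 1, 1, 19, 46)   # assume it's 7:46 pm; the date is arbitrary
shows = [('morning cartoons', 744), ('Jeopardy', 194),
         ('The Simpsons', 254), ('Grace', 284)]
for name, minutes in shows:
    print(name, (now + timedelta(minutes=minutes)).strftime('%I:%M %p'))
# The cartoons land at 8:10 am rather than 8:00, which is the rounding error
# suspected above; the evening shows come out on the nose.
```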

I suspect that Eliot wanted it to be 978 minutes to the morning cartoons, which would bump Oprah to 4:00, Jeopardy to 7:00, Simpsons and Grace to 8:00 and 8:30, and still let the cartoons begin at 8 am. Or perhaps the kids aren’t that great at arithmetic yet.

Stephen Beals’s Adult Children for the 30th of October tries to build a “math error” out of repeated use of the phrase “I couldn’t care less”. The argument is that the thing one cares least about is unique. But why can’t there be two equally least-cared-about things?

We can consider caring about things as an optimization problem. Optimization problems are about finding the most of something given some constraints. If you want the least of something, multiply the thing you have by minus one and look for the most of that. You may giggle at this. But it’s the sensible thing to do. And many things can be equally high, or low. Take a bundt cake pan, and drizzle a little water in it. The water separates into many small, elliptic puddles. If the cake pan were perfectly formed, and set on a perfectly level counter, then the bottom of each puddle would be at the same minimum height. I grant a real cake pan is not perfect; neither is any counter. But you can imagine such.
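
The negate-and-maximize trick sounds sillier than it is. It amounts to nothing more than this:

```python
values = [3.2, -1.5, 0.0, 7.8]

# Finding the least of something is finding the most of its negation,
# then flipping the sign back.
assert min(values) == -max(-v for v in values)
```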

Just because you can imagine it, though, must it exist? Think of the “smallest positive number”. The idea is simple. Positive numbers are a set of numbers. Surely there’s some smallest number. Yet there isn’t; name any positive number and we can name a smaller number. Divide it by two, for example. Zero is smaller than any positive number, but it’s not itself a positive number. A minimum might not exist, at least not within the confines where we are to look. It could be there is not something one could not care less about.

So a minimum might or might not exist, and it might or might not be unique. This is why optimization problems are exciting, challenging things.

A bedbug declares that 'according to our quantum mechanical computations, our entire observable universe is almost certainly Fred Wardle's bed.'
Niklas Eriksson’s Carpe Diem for the 1st of November, 2015. I’m not sure how accurately the art depicts bedbugs, although I’m also not sure how accurately Eriksson should.

Niklas Eriksson’s Carpe Diem for the 1st of November is about understanding the universe by way of observation and calculation. We do rely on mathematics to tell us things about the universe. Immanuel Kant has a bit of reputation in mathematical physics circles for this observation. (I admit I’ve never seen the original text where Kant observed this, so I may be passing on an urban legend. My love has several thousands of pages of Kant’s writing, but I do not know if any of them touch on natural philosophy.) If all we knew about space was that gravitation falls off as the square of the distance between two things, though, we could infer that space must have three dimensions. Otherwise that relationship would not make geometric sense.

Jeff Harris’s kids-information feature Shortcuts for the 1st of November was about the Harvard Computers. By this we mean the people who did the hard work of numerical computation, back in the days before this could be done by electrical and then electronic computer. Mathematicians relied on people who could do arithmetic in those days. There is the folkloric belief that mathematicians are inherently terrible at arithmetic. (I suspect the truth is people assume mathematicians must be better at arithmetic than they really are.) But here, there’s the mathematics of thinking what needs to be calculated, and there’s the mathematics of doing the calculations.

Their existence tends to be mentioned as a rare bit of human interest in numerical mathematics books, usually in the preface in which the author speaks with amazement of how people who did computing were once called computers. I wonder if books about font and graphic design mention in their prefaces how people who typed used to be called typewriters.

Who Was Jonas Moore?


I imagine I’m not the only person to have not realized the anniversary of Jonas Moore’s death was upon us again. Granted he’s not in anyone’s short list of figures from mathematical history. The easiest thing to say about him is that he appears to have coined common shorthands for the trigonometric functions: cot for cotangent, that sort of thing. Perhaps nothing exciting, but it’s something that had to be done.

Moore’s more interesting than that. The Renaissance Mathematicus has a biographic essay. Particularly of interest is that Moore oversaw the building of the Royal Observatory in Greenwich, and paid for the first instruments put into it. And, with Samuel Pepys, he founded the Royal Mathematical School at Christ’s Hospital, to train men in scientific navigation. As such he’s got a place in the story of longitude, and time-keeping, and our understanding of how to measure things.

That won’t put him onto your short list of important figures in the history of mathematics and science. But it’s interesting anyway.

Reading the Comics, June 20, 2015: Blatantly Padded Edition, Part 1


I confess. I’m padding my post count with the end-of-the-week roundup of mathematically-themed comic strips. While what I’ve got is a little long for a single post it’s not outrageously long. But I realized that if I split this into two pieces then, given how busy last week was around here, and how I have an A To Z post ready for Monday already, I could put together a string of eight days of posting. And that would look so wonderful in the “fireworks display” of posts that WordPress puts together for its annual statistics report. Please don’t think worse of me for it.

John Graziano’s Ripley’s Believe It or Not (June 17) presents the trivia point that Harvard University is older than calculus. That’s fair enough to say, although I don’t think it merits Graziano’s exclamation point. A proper historical discussion of when calculus was invented has to be qualified. It’s a big, fascinating invention; such things don’t have unambiguous origin dates. You can see what are in retrospect obviously the essential ideas of calculus in historical threads weaving through thousands of years and every mathematically-advanced culture. But calculus as we know it, the set of things that you will see in an Introduction To Calculus textbook, got organized into a coherent set of ideas, the thing we now call calculus, in the late 17th century. Most of its notation took shape by the mid-18th century, especially as Leonhard Euler promoted many of the symbols and much of the notation that we still use today.

John Graziano’s Ripley’s Believe It or Not is still a weird attribution even if I can’t think of a better one.

Hector D Cantu and Carlos Castellanos’s Baldo (June 18) reminds us that all you really need to do mathematics well is have a problem which you’re interested in. But what isn’t that true of?

J C Duffy’s The Fusco Brothers (June 18) is about the confusion between what positive and negative mean in test screenings. I’ve written about this before. The use of positive for what is typically bad news, and negative for what is typically good news, seems to trace to statistical studies. The test amounts to an experiment. We measure something in a complicated system, like a body. Is that measurement consistent with what we might normally expect, or is it so far away from normal that it’s implausible that it might be just chance? The “positive” then reflects finding that whatever is measured is unlikely to be that far from normal just by chance.

Larry Wright’s Motley (June 18, rerun from June 18, 1987) uses a bit of science and mathematics as a signifier of intelligence. In the context of a game show, though, “23686 π” is an implausible answer. Unless the question was something like “what’s the circumference of a circle with diameter 23686?” there’s just no way 23686 would even come up. I suspect “hydromononucleatic acid” isn’t a thing either.

Mark Parisi’s Off The Mark (June 18) is this week’s anthropomorphic numerals joke.

Arnold is sparring with a kangaroo. Told to outsmart him, Arnold recites the Pythagorean Theorem. He gets clobbered.
Bud Grace’s The Piranha Club for the 19th of June, 2015.

Bud Grace’s The Piranha Club (June 19) is another strip to use mathematics as a signifier of intelligence. And, hey, guy punched by a kangaroo, what’s not to like? (In the June 20 strip the kangaroo’s joey emerges from a pouch and punches him too, so I suppose the kangaroo’s female, never mind what the 19th says.)

Reading the Comics, April 20, 2015: History of Mathematics Edition


This is a bit of a broad claim, but it seems Comic Strip Master Command was thinking of all mathematics one lead-time ago. There’s a comic about the original invention of mathematics, and another showing off 20th century physics equations. This seems as much of the history of mathematics as one could reasonably expect from the comics page.

Mark Anderson’s Andertoons gets its traditional appearance around here with the April 17th strip. It features a bit of arithmetic that is indeed lovely but wrong.


Roller Coaster Immortality Update!


Several years ago I had the chance to go to Lakemont Park, in Altoona, Pennsylvania. It’s a lovely and very old amusement park, featuring the oldest operating roller coaster, Leap The Dips. As roller coasters go it’s not very large and not very fast, but it’s a great ride. It does literally and without exaggeration leap off the track, though not far enough to be dangerous. I recommend the park and the ride to people who have cause to be in the middle of Pennsylvania.

I wondered whether any boards in it might date from the original construction in 1902 by the E Joy Morris company. If we make some assumptions we can turn this into a probability problem. It’s a problem of a type that always seems to be answered 1/e. (The problem is “what is the probability that any particular piece of wood has lasted 100 years, if a piece of wood has a one percent chance of needing replacement every year?”) That’s a probability of about 37 percent. But I doubted this answer meant anything. My skepticism came from wondering why every piece of wood should be equally likely to survive every year. Different pieces serve different structural roles, and will be exposed to the elements differently. How can I be sure that the probability one piece needs replacement is independent of the probability some other piece needs replacement? But if they’re not independent then my calculation doesn’t give a relevant answer.
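
The calculation behind that 37 percent, such as it is, under the assumptions in the parenthetical:

```python
import math

# Chance one particular board survives 100 years, if each year it has
# an independent one percent chance of needing replacement.
print(0.99 ** 100)    # about 0.366
print(1 / math.e)     # about 0.368, which is why the answer "is" 1/e
```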

The Leap-The-Dips roller coaster at Lakemont Park, Altoona, Pennsylvania.

A recent post on the Usenet roller coaster enthusiast newsgroup rec.roller-coaster, in a discussion titled “Age a coaster should be preserved”, suggests I was right in my skepticism. Derek Gee writes:

According to the video documentary the park produced around
1999, all of the original upright lumber was found to be in excellent shape.
The E. Joy Morris company had waterproofed it by sealing it in ten coats of
paint and it was old-growth hardwood. All the horizontal lumber was
replaced as I recall.

I am aware this is not an academically rigorous answer to the question of how much of the roller coaster’s original construction is still in place. But it is a lead. It suggests that quite a bit of the antique ride is as antique as could be.

A bit more about Thomas Hobbes


You might remember a post from last April, Thomas Hobbes and the Doing of Important Mathematics, timed to the renowned philosopher’s birthday. I talked about him because a good bit of his intellectual life was spent trying to achieve mathematical greatness, which he never did.

Recently I’ve had the chance to read Douglas M Jesseph’s Squaring The Circle: The War Between Hobbes And Wallis, about Hobbes’s attempts to re-build mathematics on an intellectual foundation he found more satisfying, and the conflict this put him in with mainstream mathematicians, particularly John Wallis (algebra and calculus pioneer, and popularizer of the ∞ symbol). The situation of Hobbes’s mathematical ambitions is more complicated than I realized, although the one thing history teaches us is that the situation is always more complicated than we realized, and I wanted to at least make my writings about Hobbes a bit less incomplete. Jesseph’s book can’t be fairly reduced to a blog post, of course, and I’d recommend it to people who want to really understand what the fuss was all about. It’s a very good idea to have some background in philosophy and in 17th century English history going in, though, because it turns out a lot of the struggle — and particularly the bitterness with which Hobbes and Wallis fought, for decades — ties into the religious and political struggles of England of the 1600s.

Hobbes’s project, I better understand now, was not merely the squaring of the circle or the solving of other ancient geometric problems like the doubling of the cube or the trisecting of an arbitrary angle, although he did claim to have various proofs or approximate proofs of them. He seems to have been interested in building a geometry on more materialist grounds, more directly as models of the real world, instead of the pure abstractions that held sway then (and, for that matter, now). This is not by itself a ridiculous thing to do: we are almost always better off for having multiple independent ways to construct something, because the differences in those ways teaches us not just about the thing, but about the methods we use to discover things. And purely abstract constructions have problems also: for example, if a line can be decomposed into nothing but an enormous number of points, and absolutely none of those points has any length, then how can the line have length? You can answer that, but it’s going to require a pretty long running start.

Trying to re-build the logical foundations of mathematics is an enormously difficult thing to do, and it’s not surprising that someone might fail to do so perfectly. Whole schools of mathematicians might be needed just to achieve mixed success. And Hobbes wasn’t able to attract whole schools of mathematicians, in good part because of who he was.

Hobbes achieved immortality as an important philosopher with the publication of Leviathan. What I had not appreciated and Jesseph made clear was that in the context of England of the 1650s, Hobbes’s views on the natures of God, King, Society, Law, and Authority managed to offend — in the “I do not know how I can continue to speak with a person who holds views like that” sense — pretty much everybody in England who had any strong opinion about anything in politics, philosophy, or religion. I do not know for a fact that Hobbes then went around kicking the pet dogs of any English folk who didn’t have strong opinions about politics, philosophy, or religion, but I can’t rule it out. At least part of the relentlessness and bitterness with which Wallis (and his supporters) attacked Hobbes, and with which Hobbes (and his supporters) attacked back, can be viewed as a spinoff of the great struggle between the Crown and Parliament that produced the Civil War, the Commonwealth, and the Restoration, and in that context it’s easier to understand why all parties carried on, often quibbling about extremely minor points, well past the point that their friends were advising them that the quibbling was making them look bad. Hobbes was a difficult person to side with, even when he was right, and a lot of his mathematics just wasn’t right. Some of it I’m not sure ever could be made right, however many ingenious people you had working to avoid flaws.

An amusing little point that Jesseph quotes is a bit in which Hobbes, making an argument about the rights that authority has, asserts that if the King decreed that Euclid’s Fifth Postulate should be taught as false, then false it would be in the kingdom. The Fifth Postulate, also known as the Parallel Postulate, is one of the axioms on which classical Greek geometry was built and it was always the piece that people didn’t like. The other postulates are all nice, simple, uncontroversial, common-sense things like “all right angles are equal”, the kinds of things so obvious they just have to be axioms. The Fifth Postulate is this complicated-sounding thing about how, if a line is crossed by two non-parallel lines, you can determine on which side of the first line the non-parallel lines will meet.

It wouldn’t be really understood or accepted for another two centuries, but, you can suppose the Fifth Postulate to be false. This gives you things named “non-Euclidean geometries”, and the modern understanding of the universe’s geometry is non-Euclidean. In picking out an example of something a King might decree and the people would have to follow regardless of what was really true, Hobbes picked out an example of something that could be decreed false, and that people could follow profitably.

That’s not mere ironical luck, probably. A streak of mathematicians spent a long time trying to prove the Fifth Postulate was unnecessary, at least, by showing it followed from the remaining and non-controversial postulates, or at least that it could be replaced with something that felt more axiomatic. Of course, in principle you can use any set of axioms you like to work, but some sets produce more interesting results than others. I don’t know of any interesting geometry which results from supposing “not all right angles are equal”; supposing that the Fifth Postulate is untrue gives us general relativity, which is quite nice to have.

Again I have to warn that Jesseph’s book is not always easy reading. I had to struggle particularly over some of the philosophical points being made, because I’ve got only a lay understanding of the history of philosophy, and I was able to call on my love (a professional philosopher) for help at points. I imagine someone well-versed in philosophy but inexperienced with mathematics would have a similar problem (although — don’t let the secret out — you’re allowed to just skim over the diagrams and proofs and go on to the explanatory text afterwards). But for people who want to understand the scope and meaning of the fighting better, or who just want to read long excerpts of the wonderful academic insulting that was current in the era, I do recommend it. Check your local college or university library.

Reading the Comics, November 20, 2014: Ancient Events Edition


I’ve got enough mathematics comics for another roundup, and this time, the subjects give me reason to dip into ancient days: one to the most famous, among mathematicians and astronomers anyway, of Greek shipwrecks, and another to some point in the midst of winter nearly seven thousand years ago.

Eric the Circle (November 15) returns “Griffinetsabine” to the writer’s role and gives another “Shape Single’s Bar” scene. I’m amused by Eric appearing with his ex: x is practically the icon denoting “this is an algebraic expression”, while geometry … well, circles are good for denoting that, although I suspect that triangles or maybe parallelograms are the ways to denote “this is a geometric expression”. Maybe it’s the little symbol for a right angle.

Jim Meddick’s Monty (November 17) presents Monty trying to work out just how many days there are to Christmas. This is a problem fraught with difficulties, starting with the obvious: does “today” count as a shopping day until Christmas? That is, if it were the 24th, would you say there are zero or one shopping days left? Also, is there even a difference between a “shopping day” and a “day” anymore now that nobody shops downtown so it’s only the stores nobody cares about that close on Sundays? Sort all that out and there’s the perpetual problem in working out intervals between dates on the Gregorian calendar, which is that you have to be daft to try working out intervals between dates on the Gregorian calendar. The only worse thing is trying to work out the intervals between Easters on it. My own habit for this kind of problem is to use the United States Navy’s Julian Date conversion page. The Julian date is a straight serial number, counting the number of days that have elapsed since noon Universal Time at what’s called the 1st of January, 4713 BCE, on the proleptic Julian calendar (“proleptic” because nobody around at the time was using, or even imagined, the calendar, but we can project back to what date that would have been), a year picked because it’s the start of several astronomical cycles, and it’s way before any specific recordable dates in human history, so any day you might have to particularly deal with has a positive number. Of course, to do this, we’re transforming the problem of “counting the number of days between two dates” to “counting the number of days between a date and January 1, 4713 BCE, twice”, but the advantage of that is, the United States Navy (and other people) have worked out how to do that and we can use their work.
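
The serial-number idea is also why date libraries make this an easy problem: they do the equivalent of the Julian-date trick internally, so subtracting two dates is just subtracting two counts of days. Here’s the shopping-days question for the strip’s own publication date, the 17th of November, 2014.

```python
from datetime import date

# Days from the strip's publication to Christmas. Whether "today" counts
# as a shopping day is still up to you.
print((date(2014, 12, 25) - date(2014, 11, 17)).days)   # 38
```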

Bill Hind’s kids-sports comic Cleats (November 19, rerun) presents Michael offering basketball advice that verges into logic and set theory problems: making the ball not go to a place outside the net is equivalent to making the ball go inside the net (if we decide that the edge of the net counts as either inside or outside the net, at least), and depending on the problem we want to solve, it might be more convenient to think about putting the ball into the net, or not putting the ball outside the net. We see this, in logic, in a set of relations called De Morgan’s Laws (named for Augustus De Morgan, who put these ideas in modern mathematical form), which describe what kinds of descriptions — “something is outside both sets A and B at once” or “something is not inside set A or set B”, and so on — represent the same relationship between the thing and the sets.
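
De Morgan’s Laws are pleasantly easy to spot-check, since there are only four combinations of true and false to run through:

```python
# De Morgan's Laws: "not (A or B)" matches "(not A) and (not B)",
# and "not (A and B)" matches "(not A) or (not B)".
for A in (False, True):
    for B in (False, True):
        assert (not (A or B)) == ((not A) and (not B))
        assert (not (A and B)) == ((not A) or (not B))
```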

Tom Thaves’s Frank and Ernest (November 19) is set in the classic caveman era, with prehistoric Frank and Ernest and someone else discovering mathematics and working out whether a negative number times a negative number might be positive. It’s not obvious right away that they should, as you realize when you try teaching someone the multiplication rules including negative numbers, and it’s worth pointing out, a negative times a negative equals a positive because that’s the way we, the users of mathematics, have chosen to define negative numbers and multiplication. We could, in principle, have decided that a negative times a negative should give us a negative number. This would be a different “multiplication” (or a different “negative”) than we use, but as long as we had logically self-consistent rules we could do that. We don’t, because it turns out negative-times-negative-is-positive is convenient for problems we like to do. Mathematics may be universal — something following the same rules we do has to get the same results we do — but it’s also something of a construct, and the multiplication of negative numbers is a signal of that.

Goofy sees the message 'buried treasure in back yard' in his alphabet soup; what are the odds of that?
The Mickey Mouse comic rerun the 20th of November, 2014.

Mickey Mouse (November 20, rerun) — I don’t know who wrote or drew this, but Walt Disney’s name was plastered onto it — sees messages appearing in alphabet soup. In one sense, such messages are inevitable: jumble and swirl letters around and eventually, surely, any message there are enough letters for will appear. This is very similar to the problem of infinite monkeys at typewriters, although with the special constraint that if, say, the bowl has only two letters “L”, it’s impossible to get the word “parallel”, unless one of the I’s is doing an impersonation. Here, Goofy has the message “buried treasure in back yard” appear in his soup; assuming those are all the letters in his soup then there’s something like 44,881,973,505,008,640,000 different arrangements of letters that could come up. There are several legitimate messages you could make out of that (“treasure buried in back yard”, “in back yard buried treasure”), not to mention shorter messages that don’t use all those letters (“run back”), but I think it’s safe to say the number of possible sentences that make sense are pretty few and it’s remarkable to get something like that. Maybe the cook was trying to tell Goofy something after all.
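
That count is a multinomial coefficient: the factorial of all 24 letters, divided down by the factorials of the repeats. Here it is worked out, using my own tally of the letters in the message; a different assumption about what’s floating in the bowl would change the number.

```python
from math import factorial, prod

# Letters in "buried treasure in back yard" and how often each appears.
counts = {'b': 2, 'u': 2, 'r': 4, 'i': 2, 'e': 3, 'd': 2,
          't': 1, 'a': 3, 's': 1, 'n': 1, 'c': 1, 'k': 1, 'y': 1}
letters = sum(counts.values())   # 24 letters in all
arrangements = factorial(letters) // prod(factorial(c) for c in counts.values())
print(arrangements)   # 44,881,973,505,008,640,000
```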

Mark Anderson’s Andertoons (November 20) is a cute gag about the dangers of having too many axes on your plot.

Gary Delainey and Gerry Rasmussen’s Betty (November 20) mentions the Antikythera Mechanism, one of the most famous analog computers out there, and that’s close enough to pure mathematics for me to feel comfortable including it here. The machine was found in April 1900, in an ancient shipwreck, and at first seemed to be just a strange lump of bronze and wood. By 1902 the archeologist Valerios Stais noticed a gear in the mechanism, but since it was believed the wreck far, far predated any gear mechanisms, the machine languished in that strange obscurity that a thing which can’t be explained sometimes suffers. The mechanism appears to be designed to be an astronomical computer, tracking the positions of the Sun and the Moon — tracking the actual moon rather than an approximate mean lunar motion — the rising and setting of some constellations, solar eclipses, several astronomical cycles, and even the Olympic Games. It’s an astounding mechanism, and it’s mysterious: who made it? How? Are there others? What happened to them? How was the mechanical engineering needed for this developed, and what other projects did the people who created this also do? Any answers to these questions, if we ever know them, seem sure to be at least as amazing as the questions are.

Some Stuff About Edmond Halley


When I saw the Maths History tweet about Edmond Halley’s birthday I wondered if the November 8th date given was the relevant one since, after all, in 1656 England was still on the Julian calendar. The MacTutor biography of him makes clear that the 8th of November is his Gregorian-date birthday, and he was born on the 29th of October by the calendar his parents were using, although it’s apparently not clear he was actually born in 1656. Halley claimed it was 1656, at least, and he probably heard from people who knew.

Halley is famous for working out the orbit of the comet that’s gotten his name attached, and correctly so: working out the orbits of comets was one of the first great accomplishments of Newtonian mechanics, and Halley’s work took into account how Jupiter’s gravitation distorts the orbit of a comet. It’s great work. And he’s also famous within mathematical and physics circles because it’s fair to wonder whether, without his nagging and his financial support, Isaac Newton would have published his Principia Mathematica. Astronomers note him as the first Western European astronomer to set up shop in the southern hemisphere and produce a map of that part of the sky, as well.

That hardly exhausts what’s interesting about him: for example, he joined in the late-17th-century fad for diving bell companies (for a while, you couldn’t lose money excavating wrecked ships, until finally everyone did) and even explored the bed of the English Channel in a diving bell of his own design. This is to me the most terrifying thing he did, and that’s even with my awareness he led two scientific sailing expeditions, one of which was cut short after among other things irreconcilable differences with the ship’s other commissioned officer, Lieutenant Edward Harrison (who blamed Halley for the oblivion which Harrison’s book on longitude received), and the second of which included a pause in Recife when Halley was put under guard by a man claiming to be the English consul, and who was actually an agent of the Royal African Company considering whether to seize Halley’s ship[1] as a prize.

After his second expedition Halley published charts showing the magnetic declination, how far a magnetic compass’s “north” is from true north, and introduced one of those great conceptual breakthroughs that charts can give us: he connected the lines showing the points where the declination was equal. These isolines are a magnificent way to diagram three-dimensional information on a two-dimensional chart; we see them in topographic maps, as the contour curves showing where a hill rises or a valley sinks. We see them in weather maps, the lines where the temperature is 70 or 80 Fahrenheit (or 20 or 25 Celsius, if you rather) or where the wind speed is some sufficiently alarming figure. We see them (in three-dimensional form) in medical imaging, where a region of constant density gets the same color and this is used to understand a complicated shape within. Not all these uses derive directly from Halley; as with all really good, widely usable concepts many people discovered the concept, but Halley was among the first to put them to obvious, prominent use.

And something that might serve as comfort to anyone who’s taking a birthday hard: at age 65, Halley began a study of the moon’s saros, the cycle patterns of different relative positions the Sun and Moon have in the sky which describe when eclipses happen. One cycle takes a bit over eighteen years to complete. Halley lived long enough to complete this work.

[1] The Paramore, which — I note because this is just the kind of world it was back then — was constructed in 1694 at the Royal Dockyard at Deptford on the River Thames for a scientific circumnavigation of the globe, and first sailed in April 1698 under Tsar Peter the Great, then busy travelling western Europe under ineffective cover to learn things which might modernize Russia. Halley had hoped to sail in 1696, but he was waylaid by his appointment to the Mint at Chester, courtesy of Newton.

Reading the Comics, October 7, 2014: Repeated Comics Edition


Since my last roundup of mathematics-themed comic strips there’s been a modest drizzle of new ones, and I’m not sure that I can find any particular themes to them, except that Zach Weinersmith and the artistic collective behind Eric the Circle apparently like my attention. Well, what the heck; that’s easy enough to give.

Zach Weinersmith’s Saturday Morning Breakfast Cereal (September 29) hopes to be that guy who appears somewhere around the fourth comment of every news article ever that mentions a correlation being found between two quantities. A lot of what’s valuable about science is finding causal links between things, but it’s only in rare and, often, rather artificial circumstances that such links are easy to show. What’s more often necessary is showing that as one quantity changes so does another, which allows one to suspect a link. Then, typically, one would look for a plausible reason they might have anything to do with one another, and look for ways to experiment and prove whether there is or is not.

But just because there is a correlation doesn’t by itself mean that one thing necessarily has anything to do with another. The correlation could be coincidence, for example, or both quantities could be influenced by some other confounding factor. To be worth mention in a decent journal, a correlation is probably going to be strong enough that it’s hard to believe it’s just coincidence, but there could yet be some confounding factor. And even if there is a causal link, in the complicated mess that is reality it can be difficult to discern which way the link flows. This is summarized in deductive logic by saying that correlation does not imply causation, but that uses deductive logic’s definition of “imply”.

In deductive logic to say “this implies that” means it is impossible for “this” to be true and “that” false simultaneously. It is perfectly permissible for both “this” and “that” to be true, and permissible for “this” to be false and “that” false, and — this is the point where Intro to Logic students typically crash — permissible for “this” to be false and “that” true. Colloquially, though, “imply” has a different connotation, something more along the lines of “this” and “that” have to both be false or both be true together. Don’t make that mistake on your logic test.
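If it helps to see that definition laid out, here is a tiny sketch in Python, my own illustration rather than anything from a logic text, tabulating the deductive-logic sense of “implies”:

    # "p implies q", in the deductive sense, is false only when p is true and q is false.
    for p in (True, False):
        for q in (True, False):
            implies = (not p) or q
            print(f"p={p!s:5}  q={q!s:5}  p implies q: {implies}")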

When a logician says that correlation does not imply causation, she is saying that it is imaginable for the correlation to be true while the causation is false. She is not saying the causation is false; she is just saying that the case is not proved from the fact of a correlation being true. And that’s so; if we just knew two things were correlated we would have to experiment to find whether there is a causal link. But finding a correlation is one of the ways to start finding causal links; it’d be obviously daft not to use them as the start of one’s search. Anyway, that guy in about the fourth comment of every news report about a correlation just wants you to know it’s very important he tell you he’s smarter than journalists.

Saturday Morning Breakfast Cereal pops back up again (October 1) with an easier-to-describe joke about August Ferdinand Möbius and his rather famous strip, here applied to the old gag about walking to school uphill both ways. One hates to be a spoilsport, but Möbius was educated at home until 13, so this comic is not reliable as a shorthand biography of the renowned mathematician.

Eric the Circle has had a couple strips by “Griffinetsabine”, one on October 3, and another on the 7th of October, based on the Shape Singles Bar. Both strips are jokes about two points connecting by a line, suggesting that Griffinetsabine knew the premise was good for a couple of variants. I’d have spaced out the publication of them farther but perhaps this was the best that could be done.

Mikael Wulff and Anders Morgenthaler’s Truth Facts (September 30) — a panel strip that’s often engaging in showing comic charts — gives a guide to what the number of digits you’ve memorized says about you. (For what it’s worth, I peter out at “897932”.) I’m mildly delighted to find that their marker for Isaac Newton is more or less correct: Newton did work out pi to fifteen decimal places, by using his binomial theorem and a calculation of the area within a particular wedge of the circle. (As I make it out Wulff and Morgenthaler put Newton at fourteen decimal points, but they might have read references to Newton working out “fifteen decimal points” as meaning something different to what I do.) Newton’s was not the best calculation of pi in the 1660s when he worked it out — Christoph Grienberger, an Austrian Jesuit astronomer, had calculated 38 decimal places a generation earlier — but I can’t blame Wulff and Morgenthaler for supposing Newton to be a more recognizable name than Grienberger. I imagine if Einstein or Stephen Hawking had done any particularly unique work in calculating the digits of pi they’d have appeared on the chart too.

John Graziano’s Ripley’s Believe It or Not (October 1) — and don’t tell me that attribution doesn’t look weird — shares a story about the followers of the Ancient Greek mathematician, philosopher, and mystic Pythagoras, that they were forbidden to wear wool, eat beans, or pick up things they had dropped. I have heard the beans thing before and I think I’ve heard the wool prohibition before, but I don’t remember hearing about them not being able to pick up things before.

I’m not sure I can believe it, though: Pythagoras was a strange fellow, so far as the historical record is clear. It’s hard to be sure just what is true about him and his followers, though, and what is made up, either out of devoted followers building up the figure they admire or out of critics making fun of a strange fellow with his own little cult. Perhaps it’s so, perhaps it’s not. I would like to see a primary source, and I don’t think any exist.

The Little King skates a figure 8 that requires less tricky curving.
Otto Soglow’s The Little King for 29 February 1948.

Otto Soglow’s The Little King (October 5; originally run February 29, 1948) provides its normal gentle, genial humor in the Little King working his way around the problem of doing a figure 8.

Machines That Give You Logarithms


As I’ve laid out the tools that the Harvard IBM Automatic Sequence-Controlled Calculator would use to work out a common logarithm, now I can show how this computer of the 1940s and 1950s would do it. The goal, remember, is to compute logarithms to a desired accuracy, using computers that haven’t got abundant memory, and as quickly as possible. As quickly as possible means, roughly, avoiding multiplication (which takes time) and doing as few divisions as can possibly be done (divisions take forever).

As a reminder, the tools we have are:

  1. We can work out at least some logarithms ahead of time and look them up as needed.
  2. The natural logarithm of a number close to 1 is \log_e\left(1 + h\right) = h - \frac12h^2 + \frac13h^3 - \frac14h^4 + \frac15h^5 - \cdots .
  3. If we know a number’s natural logarithm (base e), then we can get its common logarithm (base 10): multiply the natural logarithm by the common logarithm of e, which is about 0.43429.
  4. Whether natural or common (or any other logarithm you might like), the logarithm of a product is the sum of the logarithms: \log\left(a\cdot b\cdot c \cdot d \cdots \right) = \log(a) + \log(b) + \log(c) + \log(d) + \cdots

Now we’ll put this to work. The first step is deciding which logarithms to work out ahead of time. Since we’re dealing with common logarithms, we only need to be able to work out the logarithms for numbers between 1 and 10: the common logarithm of, say, 47.2286 is one plus the logarithm of 4.72286, and the common logarithm of 0.472286 is minus one plus the logarithm of 4.72286. So we’ll start by working out the logarithms of 1, 2, 3, 4, 5, 6, 7, 8, and 9, and storing them in what, in 1944, was still a pretty tiny block of memory. The original computer using this could store 72 numbers at a time, remember, each to 23 decimal digits.

So let’s say we want to know the logarithm of 47.2286. We have to divide this by 10 in order to get the number 4.72286, which is between 1 and 10, so we’ll need to add one to whatever we get for the logarithm of 4.72286. (And, yes, we want to avoid doing divisions, but dividing by 10 is a special case. The Automatic Sequence-Controlled Calculator stored numbers, if I am not grossly misunderstanding things, in base ten, and so dividing or multiplying by ten was as fast for it as moving the decimal point is for us. Modern computers, using binary arithmetic, find it as fast to divide or multiply by powers of two, even though division in general is a relatively sluggish thing.)

We haven’t worked out what the logarithm of 4.72286 is. And we don’t have a formula that’s good for that. But: 4.72286 is equal to 4 times 1.1807, and therefore the logarithm of 4.72286 is going to be the logarithm of 4 plus the logarithm of 1.1807. We worked out the logarithm of 4 ahead of time (it’s about 0.60206, if you’re curious).

We can use the infinite series formula to get the natural logarithm of 1.1807 to as many digits as we like. The natural logarithm of 1.1807 will be about 0.1807 - \frac12 0.1807^2 + \frac13 0.1807^3 - \frac14 0.1807^4 + \frac15 0.1807^5 - \cdots or 0.16613. Multiply this by the logarithm of e (about 0.43429) and we have a common logarithm of about 0.07214. (We have an error estimate, too: we’ve got the natural logarithm of 1.1807 within a margin of error of \frac16 0.1807^6 , or about 0.000 0058, which, multiplied by the logarithm of e, corresponds to a margin of error for the common logarithm of about 0.000 0025.)
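If you would like to check that arithmetic without a 1944 computing machine on hand, here is a little sketch in Python. It is just my verification of the numbers above, not anything the Automatic Sequence-Controlled Calculator would have run:

    import math

    h = 0.1807
    # Partial sum of log_e(1 + h) = h - h^2/2 + h^3/3 - h^4/4 + h^5/5 - ...
    natural = sum((-1) ** (k + 1) * h ** k / k for k in range(1, 6))
    common = natural * 0.43429        # multiply by the common logarithm of e
    print(natural)                    # about 0.1661
    print(common)                     # about 0.0721
    print(math.log10(1.1807))         # the true value, for comparison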

Therefore: the logarithm of 47.2286 is about 1 plus 0.60206 plus 0.07214, which is 1.6742. And it is, too; we’ve done very well at getting the number just right considering how little work we really did.

Although … that infinite series formula. That requires a fair number of multiplications, at least eight as I figure it, however you look at it, and those are sluggish. It also properly speaking requires divisions, although you could easily write your code so that instead of dividing by 4 (say) you multiply by 0.25 instead. For this particular example number of 47.2286 we didn’t need very many terms in the series to get four decimal digits of accuracy, but maybe we got lucky and some other number would have required dozens of multiplications. Can we make this process, on average, faster?

And here’s one way to do it. Besides working out the common logarithms for the whole numbers 1 through 9, also work out the common logarithms for 1.1, 1.2, 1.3, 1.4, et cetera up to 1.9. And then …

We started with 47.2286. Divide by 10 (a free bit of work) and we have 4.72286. And 4.72286 is 4 times 1.180715. And 1.180715 is equal to 1.1 — the whole number and the first digit past the decimal — times 1.07337. That is, 47.2286 is 10 times 4 times 1.1 times 1.07337. And so the logarithm of 47.2286 is the logarithm of 10 plus the logarithm of 4 plus the logarithm of 1.1 plus the logarithm of 1.07337. We are almost certainly going to need fewer terms in the infinite series to get the logarithm of 1.07337 than we need for 1.180715 and so, at the cost of one more division, we probably save a good number of multiplications.

The common logarithm of 1.1 is about 0.041393. So the logarithm of 10 (1) plus the logarithm of 4 (0.60206) plus the logarithm of 1.1 (0.041393) is 1.6435, which falls a little short of the actual logarithm we’d wanted, about 1.6742, but two or three terms in the infinite series should be enough to make that up.

Or we could work out a few more common logarithms ahead of time: those for 1.01, 1.02, 1.03, and so on up to 1.09. Our original 47.2286 divided by 10 is 4.72286. Divide that by the first number, 4, and you get 1.180715. Divide 1.180715 by 1.1, the first two digits, and you get 1.07337. Divide 1.07337 by 1.07, the first three digits, and you get 1.003156. So 47.2286 is 10 times 4 times 1.1 times 1.07 times 1.003156. So the common logarithm of 47.2286 is the logarithm of 10 (1) plus the logarithm of 4 (0.60206) plus the logarithm of 1.1 (0.041393) plus the logarithm of 1.07 (about 0.02938) plus the logarithm of 1.003156 (to be determined). Even ignoring the to-be-determined part, that adds up to 1.6728, which is a little short of the 1.6742 we want but is doing pretty well considering we’ve reduced the whole problem to three divisions, looking stuff up, and four additions.

If we go a tiny bit farther, and also have worked out ahead of time the logarithms for 1.001, 1.002, 1.003, and so on out to 1.009, and do the same process all over again, then we get some better accuracy and quite cheaply yet: 47.2286 divided by 10 is 4.72286. 4.72286 divided by 4 is 1.180715. 1.180715 divided by 1.1 is 1.07337. 1.07337 divided by 1.07 is 1.003156. 1.003156 divided by 1.003 is 1.0001558.

So the logarithm of 47.2286 is the logarithm of 10 (1) plus the logarithm of 4 (0.60206) plus the logarithm of 1.1 (0.041393) plus the logarithm of 1.07 (0.029383) plus the logarithm of 1.003 (0.001301) plus the logarithm of 1.0001558 (to be determined). Leaving aside the to-be-determined part, that adds up to 1.6741.

And the to-be-determined part is great: if we used just a single term in this series, the margin for error would be, at most, 0.000 000 0052, which is probably small enough for practical purposes. The first term in the to-be-determined part is awfully easy to calculate, too: it’s just 1.0001558 – 1, that is, 0.0001558. Add that and we have an approximate logarithm of 1.6742, which is dead on.

And I do mean dead on: work out more decimal places of the logarithm based on this summation and you get 1.674 205 077 226 78. That’s no more than five billionths away from the correct logarithm for the original 47.2286. And it required doing four divisions, one multiplication, and five additions. It’s difficult to picture getting such good precision with less work.

Of course, that’s done in part by having stockpiled a lot of hard work ahead of time: we need to know the logarithms of 1, 1.1, 1.01, 1.001, and then 2, 1.2, 1.02, 1.002, and so on. That’s 36 numbers altogether and there are many ways to work out logarithms. But people have already done that work, and we can use that work to make the problems we want to do considerably easier.

But there’s the process. Work out ahead of time logarithms for 1, 1.1, 1.01, 1.001, and so on, to whatever the limits of your patience. Then take the number whose logarithm you want and divide (or multiply) by ten until you get your working number into the range of 1 through 10. Divide out the first digit, which will be a whole number from 1 through 9. Divide out the first two digits, which will be something from 1.1 to 1.9. Divide out the first three digits, something from 1.01 to 1.09. Divide out the first four digits, something from 1.001 to 1.009. And so on. Then add up the logarithm of the power of ten you divided or multiplied by, plus the logarithms of the first divisor and the second divisor and third divisor and fourth divisor, until you run out of divisors. And then — if you haven’t already got the answer as accurately as you need — work out as many terms in the infinite series as you need; probably, it won’t be very many. Add that to your total. And you are, amazingly, done.
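And if it helps to see the whole recipe in one place, here is a sketch of the procedure as I understand it, in modern Python. The math.log10 calls merely stand in for the stockpiled table of 36 logarithms; everything after that is a few divisions, table lookups, additions, and a couple of series terms:

    import math

    # Stockpiled logarithms: 1..9, then 1.1..1.9, 1.01..1.09, 1.001..1.009.
    table = {}
    for d in range(1, 10):
        table[float(d)] = math.log10(d)
        for place in (0.1, 0.01, 0.001):
            key = round(1 + d * place, 3)
            table[key] = math.log10(key)

    def common_log(x, series_terms=2):
        total = 0.0
        while x >= 10:                 # shifting the decimal point is nearly free
            x /= 10
            total += 1.0
        while x < 1:
            x *= 10
            total -= 1.0
        for place in (1.0, 0.1, 0.01, 0.001):
            divisor = round(math.floor(x / place) * place, 3)
            if divisor > 1:            # the logarithm of 1 is 0, nothing to divide out
                total += table[divisor]
                x /= divisor
        h = x - 1                      # x is now very close to 1
        series = sum((-1) ** (k + 1) * h ** k / k
                     for k in range(1, series_terms + 1))
        return total + 0.4342944819 * series    # convert natural to common logarithm

    print(common_log(47.2286))         # about 1.674205
    print(math.log10(47.2286))         # the same thing, from the modern library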

Machines That Think About Logarithms


I confess that I picked up Edmund Callis Berkeley’s Giant Brains: Or Machines That Think, originally published 1949, from the library shelf as a source of cheap ironic giggles. After all, what is funnier than an attempt to explain to a popular audience that, wild as it may be to contemplate, electrically-driven machines could “remember” information and follow “programs” of instructions based on different conditions satisfied by that information? There’s a certain amount of that, though not as much as I imagined, and a good amount of descriptions of how the hardware of different partly or fully electrical computing machines of the 1940s worked.

But a good part, and the most interesting part, of the book is about algorithms, the ways to solve complicated problems without demanding too much computing power. This is fun to read because it showcases the ingenuity and creativity required to do useful work. The need for ingenuity will never leave us — we will always want to compute things that are a little beyond our ability — but to see how it’s done for a simple problem is instructive, if for nothing else to learn the kinds of tricks you can do to get the most out of your computing resources.

The example that most struck me and which I want to share is from the chapter on the IBM Automatic Sequence-Controlled Calculator, built at Harvard at a cost of “somewhere near 3 or 4 hundred thousand dollars, if we leave out some of the cost of research and development, which would have been done whether or not this particular machine had ever been built”. It started working in April 1944, and wasn’t officially retired until 1959. It could store 72 numbers, each with 23 decimal digits. Like most computers (then and now) it could do addition and subtraction very quickly, in the then-blazing speed of about a third of a second; it could do multiplication tolerably quickly, in about six seconds; and division, rather slowly, in about fifteen seconds.

The process I want to describe is the taking of logarithms, and why logarithms should be interesting to compute takes a little bit of justification, although it’s implicitly there just in how fast calculations get done. Logarithms let one replace the multiplication of numbers with their addition, for a considerable savings in time; better, they let you replace the division of numbers with subtraction. They further let you turn exponentiation and roots into multiplication and division, which is almost always faster to do. Many human senses seem to work on a logarithmic scale, as well: we can tell that one weight is twice as heavy as the other much more reliably than we can tell that one weight is four pounds heavier than the other, or that one light is twice as bright as the other rather than ten lumens brighter.

What the logarithm of a number is depends on some other, fixed, quantity, known as the base. In principle any positive number will do as base; in practice, these days people mostly only care about base e (which is a little over 2.718), the “natural” logarithm, because it has some nice analytic properties. Back in the day, which includes when this book was written, we also cared about base 10, the “common” logarithm, because we mostly work in base ten. I have heard of people who use base 2, but haven’t seen them myself and must regard them as an urban legend. The other bases are mostly used by people who are writing homework problems for the part of the class dealing with logarithms. To some extent it doesn’t matter what base you use. If you work out the logarithm in one base, you can convert that to the logarithm in another base by a multiplication.

The logarithm of some number in your base is the exponent you have to raise the base to to get your desired number. For example, the logarithm of 100, in base 10, is going to be 2 because 10^2 is 100, and the logarithm of e^{1/3} (a touch greater than 1.3956), in base e, is going to be 1/3. To dig deeper in my reserve of in-jokes, the logarithm of 2038, in base 10, is approximately 3.3092, because 10^{3.3092} is just about 2038. The logarithm of e, in base 10, is about 0.4343, and the logarithm of 10, in base e, is about 2.303. Your calculator will verify all that.
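If your calculator is not handy, a few lines of Python will do the verifying; this is just a check of the numbers above against the standard math library:

    import math

    print(math.log10(100))                # 2.0
    print(math.log(math.e ** (1 / 3)))    # 0.333..., the base-e logarithm of e^(1/3)
    print(math.log10(2038))               # about 3.3092
    print(math.log10(math.e))             # about 0.4343
    print(math.log(10))                   # about 2.303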

All that talk about “approximately” should have given you some hint of the trouble with logarithms. They’re only really easy to compute if you’re looking for whole powers of whatever your base is, and then only if your base is 10 or 2 or something else simple like that. If you’re clever and determined you can work out, say, that the logarithm of 2, base 10, has to be close to 0.3. It’s fun to do that, but it’ll involve such reasoning as “two to the tenth power is 1,024, which is very close to ten to the third power, which is 1,000, so therefore the logarithm of two to the tenth power must be about the same as the logarithm of ten to the third power”. That’s clever and fun, but it’s hardly systematic, and it doesn’t get you many digits of accuracy.

So when I pick up this thread I hope to explain one way to produce as many decimal digits of a logarithm as you could want, without asking for too much from your poor Automatic Sequence-Controlled Calculator.

The Geometry of Thermodynamics (Part 1)


I should mention that Peter Mander’s Carnot Cycle blog has a fine entry, “The Geometry of Thermodynamics (Part I)”, which admittedly opens with a diagram that looks like the sort of thing you create when you want a science diagram to look horrifying. That’s a bit of flavor.

Mander writes about part of what made J Willard Gibbs probably the greatest theoretical physicist that the United States has yet produced: Gibbs put much of thermodynamics into a logically neat system, the kind we still basically use today, and, all the better, saw how to represent it and understand it as a matter of surface geometries. This is an abstract kind of surface — looking at the curve traced out by, say, mapping the energy of a gas against its volume, or its temperature versus its entropy — but if you can accept the idea that we can draw curves representing these quantities then you get to use your understanding of how solid objects (and solid objects even got made — James Clerk Maxwell, of Maxwell’s Equations fame, sculpted some) look and feel.

This is a reblogging of only part one, although as Mander’s on summer holiday you haven’t missed part two.

carnotcycle


Volume One of the Scientific Papers of J. Willard Gibbs, published posthumously in 1906, is devoted to Thermodynamics. Chief among its content is the hugely long and desperately difficult “On the equilibrium of heterogeneous substances (1876, 1878)”, with which Gibbs single-handedly laid the theoretical foundations of chemical thermodynamics.

In contrast to James Clerk Maxwell’s textbook Theory of Heat (1871), which uses no calculus at all and hardly any algebra, preferring geometry as the means of demonstrating relationships between quantities, Gibbs’ magnum opus is stuffed with differential equations. Turning the pages of this calculus-laden work, one could easily be drawn to the conclusion that the writer was not a visual thinker.

But in Gibbs’ case, this is far from the truth.

The first two papers on thermodynamics that Gibbs published, in 1873, were in fact visually-led. Paper I deals with indicator diagrams and their comparative properties, while Paper II

View original post 1,490 more words

Reading the Comics, May 26, 2014: Definitions Edition


The most recent bunch of mathematics-themed comics left me feeling stumped for a theme. There’s no reason they have to have one, of course; cartoonists, as far as I know, don’t actually take orders from Comic Strip Master Command regarding what to write about, but often they seem to. Some of them seem to touch on definitions, at least, including of such ideas as the value of a quantity and how long it is between two events. I’ll take that.

Jef Mallet’s Frazz (May 23) does the kid-resisting-the-question sort of joke (not a word problem, for a change of pace), although I admit I didn’t care for the joke. I needed too long to figure out how the meaning of “value” for a variable might be ambiguous. Caulfield kind of has a point about mathematics needing to use precise words, but the process of making a word precise is a great and neglected part of mathematical history. Consider, for example: contemporary (English-language, at least) mathematicians define a prime number to be a counting number (1, 2, 3, et cetera) with exactly two factors. Why exactly two factors, except to rule out 1 as a prime number? But then why rule that 1 can’t be a prime number? As an idea gets used and explored we get a better idea of what’s interesting about it, and what it’s useful for, and can start seeing whether some things should be ruled out as not fitting a concept we want to describe, or be accepted as fitting because the concept is too useful otherwise and there’s no clear way to divide what we want from what we don’t.

I still can’t buy Caulfield’s proposition there, though.

Steve Boreman’s Little Dog Lost (May 25) circles around a bunch of mathematical concepts without quite landing on any of them. The obvious thing is the counting ability of animals: the crow asserts that crows can only count as high as nine, for example, and the animals try to work out ways to deal with the very large number of 2,615. The vulture asserts he’s been waiting for 2,615 days for the Little Dog to cross the road, and wonders how many years that’s been. The first installment of the strip, from the 26th of March, 2007, did indeed feature Vulture waiting for Little Dog to cross the road, although as I make it out there’s 2,617 days between those events.

At a guess, either Boreman was not counting the first and the last days of the interval between March 26, 2007, and May 25, 2014, or maybe he forgot the leap days. Finding how long it is between dates is a couple of kinds of messes, first because it isn’t necessarily clear whether to include the end dates, and second because the Gregorian calendar is a mess of months of varying lengths plus the fun of leap years, which include an exception for century years and an exception to the exception, making it all the harder. My preferred route for finding intervals is to not even try working the time out by myself, and instead converting every date to the Julian date, a simple serial count of the number of days since noon Universal Time on the 1st of January, 4713 BC, on the Julian calendar. Let the Navy deal with leap days. I have better things to worry about.
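For whatever it’s worth, Python’s datetime module, which quietly handles all the Gregorian mess for me, agrees with my count:

    from datetime import date

    interval = date(2014, 5, 25) - date(2007, 3, 26)
    print(interval.days)    # 2617; counting both end dates would make it 2618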

Samson’s Dark Side Of The Horse (May 26) sees Horace trying to count sheep to get himself to sleep; different ways of denoting numbers confound him. I’m not sure if it’s known why counting sheep, or any task like that, is useful in getting to sleep. My guess would be that it just falls into the sort of activity that can be done without a natural endpoint and without demanding too much attention to keep one awake, while demanding enough attention that one isn’t thinking about the bank account or the noise inside the walls or the way the car lurches two lanes to the right every time one taps the brake at highway speeds. That’s a guess, though.

Tom Horacek’s Foolish Mortals (May 26) uses the “on a scale of one to ten” standard for something that’s not usually described so vaguely, and I like the way it teases the idea of how to measure things. The “scale of one to ten” is logically flawed, since we have no idea what the units are, how little of something one represents or how much the ten does, or even whether it’s a linear scale — the difference between “two” and “three” is the same as that between “three” and “four”, the way lengths and weight work — or a logarithmic one — the ratio between “two” and “three” equals that between “three” and “four”, the way stellar magnitudes, decibel sound readings, and Richter scale earthquake intensity measure work — or, for that matter, what normal ought to be. And yet there’s something useful in making the assessment, surely because the first step towards usefully quantifying a thing is to make a clumsy and imprecise quantification of it.

Dave Blazek’s Loose Parts (May 26) kind of piles together a couple references so a character can identify himself as a double major in mathematics and theology. Of course, the generic biography for a European mathematician, between about the end of the Western Roman Empire and the Industrial Revolution, is that he (males most often had the chance to do original mathematics) studied mathematics alongside theology and philosophy, and possibly astronomy, although that reflects more how the subjects were seen as rather intertwined, and education wasn’t as specialized and differentiated as it’s now become. (The other generic mathematician would be the shopkeeper or the exchequer, but nobody tells jokes about their mathematics.)

And, finally, Doug Savage’s Savage Chickens (May 28) brings up the famous typing monkeys (here just the one of them), and what really has to be counted as a bit of success for the project.

George Berkeley’s 329th Birthday


The stream of mathematics-trivia tweets brought to my attention that the 12th of March, 1685 [1], was the birthday of George Berkeley, who’d become the Bishop of Cloyne and be an important philosopher, and who’s gotten a bit of mathematical immortality for complaining about calculus. Granted everyone who takes it complains about calculus, but Berkeley had the good sorts of complaints, the ones that force people to think harder and more clearly about what they’re doing.

Berkeley — whose name I’m told by people I consider reliable was pronounced “barkley” — particularly protested the “fluxions” of calculus as it was practiced in the day in his 1734 tract The Analyst: Or A Discourse Addressed To An Infidel Mathematician, which as far as I know nobody I went to grad school with ever read either, so maybe you shouldn’t bother reading what I have to say about them.

Fluxions were meant to represent infinitesimally small quantities, which could be added to or subtracted from a number without changing the number, but which could be divided by one another to produce a meaningful answer. That’s a hard set of properties to quite rationalize — if you can add something to a number without changing the number, you’re adding zero; and if you’re dividing zero by zero you’re not doing division anymore — and yet calculus was doing just that. For example, if you want to find the slope of a curve at a single point on the curve you’d take the x- and y-coordinates of that point, and add an infinitesimally small number to the x-coordinate, and see how much the y-coordinate has to change to still be on the curve, and then divide those changes, which are too small to even be numbers, and get something out of it.
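To make that concrete with an example of my own, not one Berkeley himself worked through so far as I know: take the curve y = x^2 and a point (x, x^2) on it. Nudge the x-coordinate by a fluxion o, and the y-coordinate has to become (x + o)^2 = x^2 + 2xo + o^2 to stay on the curve, so the change in y is 2xo + o^2. Divide that change by the change in x, which is o, and you get 2x + o; then, since o is too small to matter, throw it away and call the slope 2x. Berkeley’s complaint sits right there in the last two steps: o was small enough to discard at the end, yet not so small that dividing by it was meaningless.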

It works, at least if you’re doing the calculations right, and Berkeley supposed that it was the result of multiple logical errors cancelling one another out that they did work; but he termed these fluxions with spectacularly good phrasing “ghosts of departed quantities”, and it would take better than a century to put all his criticisms quite to rest. The result we know as differential calculus.

I should point out that it’s not as if mathematicians playing with their shiny new calculus tools were being irresponsible in using differentials and integrals despite Berkeley’s criticisms. Mathematical concepts work a good deal like inventions, in that it’s not clear what is really good about them until they’re used, and it’s not clear what has to be made better until there’s a body of experience working with them and seeing where the flaws are. And Berkeley was hardly being unreasonable for insisting on logical rigor in mathematics.

[1] Berkeley was born in Ireland. I have found it surprisingly hard to get a clear answer about when Ireland switched from the Julian to the Gregorian calendar, so I have no idea whether this birthdate is old style or new style, and for that matter whether the 1685 represents the civil year or the historical year. Perhaps it suffices to say that Berkeley was born sometime around this time of year, a long while ago.

November 2013’s Statistics


Hi again. I was hesitant to look at this month’s statistics, as I pretty much fell off the face of the earth for a week there and didn’t have the chance to do the serious thinking that’s needed for mathematics writing. The result’s almost exactly the dropoff in readership I might have predicted: from 440 views in October down to 308, and from 220 unique visitors down to 158. That’s almost an unchanged number of views per visitor, 2.00 dropping to 1.95, so at least the people still interested in me are sticking around.

The countries sending me the most viewers were as ever the United States, then Austria (hi, Elke, and thank you), the United Kingdom and then Canada. Sending me a single visitor each were Bulgaria, Cyprus, Czech Republic, Ethiopia, France, Jordan, Lebanon, Nepal, New Zealand, Russia, Singapore, Slovenia, Switzerland, and Thailand. This is also a drop in the number of single-viewer countries, although stalwarts Finland and the Netherlands are off the list. Slovenia’s the only country making a repeat appearance from last month, in fact.

The most popular articles the past month were:

And I apologize for not having produced many essays the past couple weeks, and can only fault myself for being more fascinated by some problems in my day job that’ve been taking up time and mental energy and waking me in the middle of the night with stuff I should try. I’ll be back to normal soon, I’m sure. Don’t tell my boss.

Emile Lemoine


Through the MacTutor archive I learn that today’s the birthday of Émile Michel Hyacinthe Lemoine (1840 – 1912), a mathematician I admit I don’t remember hearing of before. His particular mathematical interests were primarily in geometry (though MacTutor notes professionally he became a civil engineer responsible for Paris’s gas supply).

What interests me is that Lemoine looked into the problem of how complicated a proof is, and in just the sort of thing designed to endear him to my heart, he tried to give a concrete measurement of, at least, how involved a geometric construction was. He identified the classic steps in compass-and-straightedge constructions and classified proofs by how many steps these took. MacTutor cites him as showing that the usual solution to the Apollonius problem — construct a circle tangent to three given circles — required over four hundred operations, but that he was able to squeeze that down to 199.

However, well, nobody seems to have been very interested in this classification. That’s probably because the length doesn’t really measure how complicated a proof (or a construction) is: proofs can have a narrative flow, and a proof that involves many steps each of which seems to flow obviously (or which look like the steps in another, already-familiar proof) is going to be easier to read and to understand than one that involves fewer but more obscure steps. This is the sort of thing that challenges attempts to measure how difficult a proof is, even though it’d be interesting to know.

Here’s one of the things that would be served by being able to measure just how long a proof is: a lot of numerical mathematics depends on having sequences of randomly generated numbers, but, showing that you actually have a random sequence of numbers is a deeply hard problem. If you can specify how you get a particular digit … well, they’re not random, then, are they? Unless it’s “use this digit from this randomly generated sequence”. If you could show there’s no way to produce a particular sequence of numbers in any way more efficiently than just reading them off this list of numbers you’d at least have a fair chance at saying this is a truly unpredictable sequence. But, showing that you have found the most efficient algorithm for producing something is … well, it’s difficult to even start measuring that sort of thing, and while Lemoine didn’t produce a very good measure of algorithmic complexity, he did have an idea, and it’s difficult to see how one could get a good measure if one didn’t start with trying not-very-good ones.

Florian Cajori: A History Of Mathematical Notations


I just noticed that over at archive.org they have Volume I of Florian Cajori’s A History Of Mathematical Notations. There’s a fair chance this means nothing to you, but, Dr Cajori did a great deal of work in writing the history of mathematics in the early 20th century, and with a scope and prose style that still leaves me a bit awed. (He also wrote a history of physics; I remember reading the book, originally written in the mid-1920s, with his description of one of the mysteries of the day. With the advantage of decades on my side I knew this to be the Zeeman effect, a way that magnetic fields affect spectral lines.)

Archive.org has several of Cajori’s books, including the histories mentioned, but Mathematical Notations I like as it’s an indispensable reference. It describes, with abundant examples, the origins of all sorts of the ways we write out mathematical ideas, from numerals themselves to the choices of symbols like the + and x signs to how we got to using letters to represent quantities to something called alligation which was apparently practiced in 15th-century Venice.

Unfortunately archive.org hasn’t yet got Volume II, which includes topics like where the $ symbol for United States currency came from — Cajori had some strong opinions about this, suggesting he was tired of tracking down false leads — but it’s a book you can feel confident in leafing through to find something interesting most any time. I find his description of the way historical opinions had changed particularly fascinating, and recommend particularly Paragraph 96 (pages 64 through 68 of the book, and not one enormous block of text), describing “Fanciful hypotheses on the origins of the numeral forms”, many of them based on ideas that the symbols for numbers contain the number of vertices or strokes or some other mnemonic to how big a number is represented. Of those hypothesis formers he says, “Nor did these writers feel that they were indulging simply in pleasing pastimes or merely contributing to mathematical recreations. With perhaps only one exception, they were as convinced of the correctness of their explanations as are circle-squarers of the soundness of their quadratures”.

Dover publishing, of course, reprints the entire book on paper if you want Volumes I and II together. I admit that’s the form I have, and enjoy, since it becomes one of those books you could use to beat off an intruder if need be.

Augustin-Louis Cauchy’s birthday


The Maths History feed on Twitter mentioned that the 21st of August was the birthday of Augustin-Louis Cauchy, who lived from 1789 to 1857. His is one of those names you get to know very well when you’re a mathematics major, since he published 789 papers in his life, and did very well at publishing important papers, ones that established concepts people would actually use.

He’s got an intriguing biography, as he lived (mostly) in France during the time of the Revolution, the Directorate, Napoleon, the Bourbon Restoration, the July Monarchy, the Revolutions of 1848, the Second Republic, and the Second Empire, and had a career which got inextricably tangled with the political upheavals of the era. I note that, according to the MacTutor biography linked to earlier this paragraph, he followed the deposed King Charles X to Prague in order to tutor his grandson, but might not have had the right temperament for it: at least once he got annoyed at the grandson’s confusion and screamed and yelled, with the Queen, Marie Thérèse, sometimes telling him, “too loud, not so loud”. But we’ve all had students that frustrate us.

Cauchy’s name appears on many theorems and principles and definitions of interesting things — I just checked Mathworld and his name returned 124 different items — though I’ll admit I’m stumped how to describe what the Cauchy-Frobenius Lemma is without scaring readers off. So let me talk about something simpler.

Continue reading “Augustin-Louis Cauchy’s birthday”

Why I Say 1/e About This Roller Coaster


The Leap-The-Dips at Lakemont Park, Altoona, Pennsylvania, as photographed by Joseph Nebus in July 2013 from the edge of the launch platform.

So in my head I worked out an estimate of about one in three that any particular board would have remained from the Leap-The-Dips’ original, 1902, configuration, even though I didn’t really believe it. Here’s how I got that figure.

First, you have to take a guess as to how likely it is that any board is going to be replaced in any particular stretch of time. Guessing that one percent of boards need replacing per year sounded plausible, what with how neatly a chance of one-in-a-hundred fits with our base ten numbering system, and how it’s been about a hundred years in operation. So any particular board would have about a 99 percent chance of making it through any particular year. If we suppose that the chance of a board making it through the year is independent — it doesn’t change with the board’s age, or the condition of neighboring boards, or anything but the fact that a year has passed — then the chance of any particular board lasting a hundred years is going to be 0.99^{100} . That takes a little thought to work out if you haven’t got a calculator on hand.
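If you haven’t got the calculator on hand either, a couple of lines of Python settle it, and hint at why the subject line says 1/e; this is just my check, under the independence assumption above:

    import math

    print(0.99 ** 100)     # about 0.366, the chance one board survives a hundred years
    print(math.exp(-1))    # about 0.368, which is 1/e; (1 - 1/n)**n tends to this as n grows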

Continue reading “Why I Say 1/e About This Roller Coaster”

Just Answer 1/e Whenever Anyone Asks This Kind Of Question


I recently had the chance to ride the Leap-the-Dips at Lakemont Park (Altoona, Pennsylvania), the world’s oldest operating roller coaster. The statistics of this 1902-vintage roller coaster might not sound impressive, as it has a maximum height of about forty feet and a greatest drop of about nine feet, but it gets rather more exciting when you consider that the roller coaster car hasn’t got any seat belts or lap bar or other restraints (just a bar you can grab onto if you so choose), and that the ride was built before the invention of upstop wheels, the wheels that actually go underneath the track and keep roller coaster cars from jumping off. At each of the dips, yes, the car does jump up and off the track, and the car just keeps accelerating the whole ride. (Side boards ensure that once the car jumps off the tracks it falls back into place.) It’s worth the visit.

Looking at the wonderful mesh of wood that makes up a classic roller coaster like this inspired the question: could any of it be original? What’s the chance that any board in it has lasted the hundred-plus years of the roller coaster’s life (including a twelve-year stretch when the ride was not running, a state which usually means routine maintenance is being skipped and which just destroys amusement park rides)? Taking some reasonable guesses about the replacement rate per year, and a quite unreasonable guess about replacement procedure, I worked out my guess, given in the subject line above, and I figure to come back and explain where that all came from.

My July 2013 Statistics


As I’ve started keeping track of my blog statistics here where it’s all public information, let me continue.

WordPress says that in July 2013 I had 341 pages read, which is down rather catastrophically from the June score of 713. The number of distinct visitors also dropped, though less alarmingly, from 246 down to 156; this also implies the number of pages each visitor viewed dropped from 2.90 down to 2.19. That’s still the second-highest number of pages-per-visitor that I’ve had recorded since WordPress started sharing that information with me, so, I’m going to suppose that the combination of school letting out (so fewer people are looking for help about trapezoids) and my relatively fewer posts this month hit me. There are presently 215 people following the blog, if my Twitter followers are counted among them. They hear about new posts, anyway.

My most popular posts over the past 30 days have been:

  1. John Dee, the ‘Mathematicall Praeface’ and the English School of Mathematics, which is primarily a pointer to the excellent mathematics history blog The Renaissance Mathematicus, and about the really quite fascinating Doctor John Dee, advisor to England’s Queen Elizabeth I.
  2. Counting From 52 To 11,108, some further work from Professor Inder J Taneja on a lovely bit of recreational mathematics. (Professor Taneja even pops in for the comments.)
  3. Geometry The Old-Fashioned Way, pointing to a fun little web page in which you can work out geometric constructions using straightedge and compass live and direct on the web.
  4. Reading the Comics, July 5, 2013, and finally; I was wondering if people actually still liked these posts.
  5. On Exact And Inexact Differentials, another “reblog” style pointer, this time to Carnot Cycle, a thermodynamics-oriented blog.
  6. And The $64 Question Was, in which I learned something about a classic game show and started to think about how it might be used educationally.

My all-time most popular post remains How Many Trapezoids I Can Draw, because I think there are people out there who worry about how many different kinds of trapezoids there are. I hope I can bring a little peace to their minds. (I make the answer out at six.)

The countries sending me the most viewers the past month have been the United States (165), then Denmark (32), Australia (24), India (18), and the United Kingdom and Brazil (12 each). Sorry, Canada (11). Sending me a single viewer each were Estonia, Slovenia, South Africa, the Netherlands, Argentina, Pakistan, Angola, France, and Switzerland. Argentina and Slovenia did the same for me last month too.

John Dee, the ‘Mathematicall Praeface’ and the English School of Mathematics.


The 13th of July was the birth date for John Dee, one of those historical figures who seems ready-made for dopey historical thrillers as he combined the religious disputes of his time — the reigns of Queen Mary I and of Queen Elizabeth I — with astrology and astronomy and mathematics and possibly espionage, with navigation and the early days of England’s expansion to a world power; he even gets into such fascinating-to-the-fans issues as calendar reform.

The Renaissance Mathematicus does him some justice here by writing a biographical sketch that focuses on, first, what he can actually be shown to have done (as opposed to the many and really too-far-reaching conspiracies that can touch on him), particularly in turning England from a mathematics desert to a place where people like Isaac Newton, John Wallis (you know his work in the ∞ symbol), or William Oughtred (of slide rule fame, as well as a pioneer in using “x” to symbolize multiplication) could thrive.

The Renaissance Mathematicus

I have written about John Dee several times in the past but always in reaction to someone making stupid statements about him so I thought that today on his birthday, he was born 13 July 1527, I would write something positive without prior provocation.


The world into which Dee was born was one in which mathematics did not play a very significant role and this was particularly true of England, which in this sense lagged severely behind the continent. During the High Middle Ages the European universities virtually ignored mathematics although the introductory degree was theoretically based on the seven liberal arts including the quadrivium consisting of arithmetic, geometry, music (theory of proportions) and astronomy. These subjects were only treated in a very superficial manner and there were no dedicated chairs for the study of mathematics.

This situation began to change in the fifteenth century at the humanist universities…

View original post 1,213 more words

Roger Cotes’s Birthday


Amongst the Twitter feeds I follow and which aren’t based on fictional squirrels is the @mathshistory one, reporting just what it sounds like. It noted the 10th of July was the birthday of Roger Cotes (1682 – 1716) and I knew there was something naggingly familiar about his name. His biography at the MacTutor History of Mathematics Archive features what surely kept him in my mind: that on Cotes’s death at age 34 Isaac Newton said, “… if he had lived we might have known something”. Given Newton’s standing, that’s a eulogy almost good enough to get Cotes tenure, even today.

MacTutor credits Cotes with, among other things, inventing the radian measure of angles; I’m wise enough, I hope, to view skeptically all claims of anyone uniquely inventing anything mathematical, although it’s certainly so that radian measure — in which you give an angle of arc, not by how many degrees it reaches, but by how long the arc is, in units of the radius — is extraordinarily convenient analytically and it’s hard to see how mathematicians did without it. People who advanced the idea and its use deserve their praise. (Normal people can carry on with degrees of arc, for which the numbers are just more pleasant.) As a bonus it serves as one of the points on which people coming into trigonometry classes can feel their heads exploding.

Cotes’s name also gets a decent and, if I have it right, appropriate amount of fame for what are called Newton-Cotes formulas. These are methods for “numerical quadrature”, the slightly old-fashioned name we use to talk about numerical approximations of integrals. In an introductory calculus class one’s likely to run across a couple of rules for numerical quadrature — given names like the Trapezoid Rule, Simpson’s Rule, Simpson’s 3/8ths rule, or the Midpoint Rule — and these are all examples of the Newton-Cotes formulas. Teaching the routine for getting all these Newton-Cotes formulas was, for whatever reason, one of the things I found particularly delightful when I taught numerical mathematics; some subjects are just fun to explain.
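If you want the flavor of the simplest of those rules without the general machinery, here is a quick sketch of the composite Trapezoid Rule in Python. It is only an illustration of the idea, not the general Newton-Cotes derivation:

    def trapezoid_rule(f, a, b, n=100):
        """Approximate the integral of f from a to b using n trapezoids."""
        h = (b - a) / n
        total = (f(a) + f(b)) / 2
        for k in range(1, n):
            total += f(a + k * h)
        return total * h

    # The integral of x^2 from 0 to 1 is exactly 1/3; the rule comes close.
    print(trapezoid_rule(lambda x: x ** 2, 0.0, 1.0))    # about 0.33335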

MacTutor also notes that from 1709 through 1713, Cotes edited the second edition of Newton’s Principia, and apparently did a most thorough job of it. It says he studied the Principia and argued its points with Newton in enough detail that Newton finally removed the thanks he had given Cotes in the first draft of his preface. A difficult but correct editor is probably more pleasant to have when the project is finished.

Where Do Negative Numbers Come From?


Some time ago — and I forget when, I’m embarrassed to say, and can’t seem to find it because the search tool doesn’t work on comments — I was asked about how negative numbers got to be accepted. That’s a great question, particularly since while it seems like the idea of positive numbers is probably lost in prehistory, negative numbers definitely progressed in the past thousand years or so from something people might wildly speculate about to being a reasonably comfortable part of daily mathematics.

While searching for background information I ran across a doctoral thesis, Making Sense Of Negative Numbers, which is uncredited in the PDF I just linked to but appears to be by Dr Cecilia Kilhamn, of the University of Gothenburg, Sweden. Dr Kilhamn’s particular interest (here) is in how people learn to use negative numbers, so most of the thesis is about the conceptual difficulties people have when facing the minus sign (not least because it serves two roles, of marking a number as negative and of marking the subtraction operation), but the first chapters describe the historical process of developing the concept of negative numbers.

Continue reading “Where Do Negative Numbers Come From?”

From the Venetian Quarter


In 1204 the Fourth Crusade, reaching the peak of its mission to undermine Christianity in the eastern Mediterranean, sacked Constantinople and established a Latin ruler in the remains of the Roman Empire, which we dub today the Byzantine Empire. This I mention because I’m reading John Julius Norwich’s A History of Venice, and it discusses one of the consequences. Venice had supported the expedition, in no small part to divert the Fourth Crusaders from attacking its trading partners in Egypt, and also to reduce Constantinople as a threat to Venice’s power. Venice got direct material rewards too, and Norwich mentions one of them:

When, on 5 August 1205, Sebastiano Ziani’s son Pietro was unanimously elected Doge of Venice, the first question that confronted him was one of identity. To the long list of sonorous but mostly empty titles which had gradually become attached to the ducal throne, there had now been added a new one which meant exactly what it said: Lord of a Quarter and Half a Quarter of the Roman Empire.

This I mention because the reward of three-eighths of the Byzantine Empire (the Byzantines considered themselves the Roman Empire, quite reasonably, and called themselves that) is phrased here in a way that just wouldn’t be said today. Why go to the circumlocution of “a quarter and half a quarter” instead of “three-eighths”?

Continue reading “From the Venetian Quarter”

How Big Was West Jersey?


A map of New Jersey's counties, with municipal boundaries added.

A book I’d read about the history of New Jersey mentioned something usable for a real-world-based problem in fraction manipulation, for a class which was trying to get students back up to speed on arithmetic on their way into algebra. It required some setup to be usable, though. The point is a property sale from the 17th century, from George Hutcheson to Anthony Woodhouse, transferring “1/32 of 3/90 of 90/100 shares” of land in the province of West Jersey. There were a hundred shares in the province, so, the natural question to build is: how much land was transferred?
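Just to set up the fraction arithmetic, and assuming the phrase means the fractions multiply straight through, here is a quick check with Python’s fractions module; what that amounts to in actual land is the part that needs the history:

    from fractions import Fraction

    part = Fraction(1, 32) * Fraction(3, 90) * Fraction(90, 100)
    print(part)          # 3/3200 of one share
    print(part / 100)    # 3/320000 of the province, if the province is 100 shares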

The obvious question, to people who failed to pay attention to John T Cunningham’s This Is New Jersey in fourth grade, or who spent fourth grade not in New Jersey, or who didn’t encounter that one Isaac Asimov puzzle mystery (I won’t say which lest it spoil you), is: what’s West Jersey? That takes some historical context.

Continue reading “How Big Was West Jersey?”
