If you’ve been following me on Twitter you’ve seen reports of the Great Migration. This is the pompous name I give to the process of bringing the goldfish, who were in tanks in the basement for the winter, back outside again. This lets them enjoy the benefits of the summer, like not having me poking around testing their water every day. (We had a winter with a lot of water quality problems. I’m probably over-testing.)

The Great Migration finally: four goldfish brought outside today. 12 remain in the left tank, 14 in the right, I think.

My reports about moving them back — by setting in a net that could trap some fish and moving them out — included reports of how many remained in each tank. And many people told me how such updates as “Twelve goldfish are in the left tank, three in the right, and fifteen have been brought outside” sound like the start of a story problem. Maybe they do. I don’t have a particular story problem built on this. I’m happy to take nominations for such.

But I did have some mathematics essays based on the problem of moving goldfish to the pond outdoors and to the warm water tank indoors:

How To Count Fish, about how one could estimate a population by sampling it twice.

How To Re-Count Fish, about one of the practical problems in using this to count as few goldfish as we have at our household.

How Not To Count Fish, about how this population estimate wouldn’t work because of the peculiarities of goldfish psychology. Honest.

That I spend one essay describing how to do a thing, and then two more essays describing why it won’t work, may seem characteristically me. Well, yeah. Mathematics is a great tool. To use a tool safely requires understanding its powers and its limitations. I like thinking about what mathematics can and can’t do.

So this past week saw a lot of comic strips with some mathematical connection put forth. There were enough just for the 26th that I probably could have done an essay with exclusively those comics. So it’s another split-week edition, which suits me fine as I need to balance some of my writing loads the next couple weeks for convenience (mine).

Tony Cochrane’s Agnes for the 25th of June is fun as the comic strip almost always is. And it’s even about estimation, one of the things mathematicians do way more than non-mathematicians expect. Mathematics has a reputation for precision, when in my experience it’s much more about understanding and controlling error. Even in analysis, the study of why calculus works, the typical proof amounts to showing that the difference between what you want to prove and what you can prove is smaller than your tolerance for error. So: how do we go about estimating something difficult, like the number of stars? If it’s true that nobody really knows, how do we know there are some wrong answers? The underlying answer is that we always know some things, and those let us rule out answers that are obviously low or obviously high. We can make progress.

Russell Myers’s Broom Hilda for the 25th is about one explanation given for why time keeps seeming to pass faster as one ages. This is a mathematical explanation, built on the idea that the same linear unit of time is a greater proportion of a young person’s lifetime, so of course it seems to take longer. This is probably partly true. Most of our senses work by a sense of proportion: it’s easy to tell a one-kilogram from a two-kilogram weight by holding them, and easy to tell a five-kilogram from a ten-kilogram weight, but harder to tell a five from a six-kilogram weight.

As ever, though, I’m skeptical that anything really is that simple. My biggest doubt is that it seems to me time flies when we haven’t got stories to tell about our days, when they’re all more or less the same. When we’re doing new or exciting or unusual things we remember more of the days and more about the days. A kid has an easy time finding new things, and exciting or unusual things. Broom Hilda, at something like 1500-plus years old and really a dour, unsociable person, doesn’t do so much that isn’t just like she’s done before. Wouldn’t that be an influence? And I doubt that’s a complete explanation either. Real things are more complicated than that yet.

Mac and Bill King’s Magic In A Minute for the 25th features a form-a-square puzzle using some triangles. Mathematics? Well, logic anyway. Also a good reminder about open-mindedness when you’re attempting to construct something.

Norm Feuti’s Retail for the 26th is about how you get good at arithmetic. I suspect there are two natural paths: you either find it really interesting in its own right, or you do it often enough you want to find ways to do it quicker. Marla shows the signs of learning to do arithmetic quickly because she does it a lot: turning “30 percent off” into “subtract ten percent three times over” is definitely the easy way to go. The alternative is multiplying by seven and dividing by ten, and you don’t want to multiply by seven unless the problem gives a good reason why you should. And I certainly don’t fault the customer for not knowing offhand what 30 percent off $25 would be. Why would she be in practice at doing this sort of problem?
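If you want to see Marla’s trick spelled out, here’s a sketch in Python, with the $25 price just as a for-instance:

```python
# Marla's trick: "30 percent off" done as "subtract ten percent
# three times over". The price here is just an example.
price = 25.00
ten_percent = price / 10          # easy: shift the decimal point
sale_price = price - 3 * ten_percent

# The harder way, for comparison: multiply by 7 and divide by 10.
sale_price_hard_way = price * 7 / 10

print(sale_price)           # 17.5
print(sale_price_hard_way)  # 17.5, the same answer
```

Both routes land on the same number; the repeated-ten-percent route just never asks you to multiply by seven.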

Johnny Hart’s Back To B.C. for the 26th reruns the comic from the 30th of December, 1959. In it … uh … one of the cavemen guys has found his calendar for the next year has too many days. (Think about what 1960 was.) It’s a common problem. Every calendar people have developed has too few or too many days, as the Earth’s daily rotations on its axis and annual revolution around the sun aren’t perfectly synchronized. We handle this in many different ways. Some calendars worry little about tracking solar time and just follow the moon. Some calendars would run deliberately short and leave a little stretch of un-named time before the new year started; the ancient Roman calendar, before the addition of February and January, is famous in calendar-enthusiast circles for this. We’ve now settled on a calendar which will let the nominal seasons and the actual seasons drift out of synch slowly enough that periodic changes in the Earth’s orbit will dominate the problem before the error between actual-year and calendar-year length will matter. That’s a pretty good sort of error control.
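The calendar we’ve settled on patches the mismatch with the Gregorian leap-year rule, which is simple enough to sketch:

```python
def is_leap_year(year):
    """Gregorian rule: every fourth year gets an extra day,
    except century years not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# 1960, the year the caveman's too-long calendar was for, had the extra day:
print(is_leap_year(1960))  # True
print(is_leap_year(1900))  # False: a century year not divisible by 400
print(is_leap_year(2000))  # True: a century year that is
```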

8,978,432 is not anywhere near the number of days that would be taken between 4,000 BC and the present day. It’s not a joke about Bishop Ussher’s famous research into the time it would take to fit all the Biblically recorded events into history. The time is something like 24,600 years ago, a choice which intrigues me. It would make fair sense to declare, what the heck, they lived 25,000 years ago and use that as the nominal date for the comic strip. 24,600 is a weird number of years. Since it doesn’t seem to be meaningful I suppose Hart went, simply enough, with a number that was funny just for being riotously large.
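For anyone who wants to check how far back 8,978,432 days reaches, assuming about 365.25 days to a year:

```python
# Converting the strip's riotously large day count into years.
days = 8_978_432
years = days / 365.25
print(round(years))  # a little under 24,600 years
```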

Mark Tatulli’s Heart of the City for the 26th places itself on my Grand Avenue warning board. There’s plenty of time for things to go a different way but right now it’s set up for a toxic little presentation of mathematics. Heart, after being grounded, was caught sneaking out to a slumber party and now her mother is sending her to two weeks of Math Camp. I’m supposing, from Tatulli’s general attitude about how stuff happens in Heart and in Lio that Math Camp will not be a horrible, penal experience. But it’s still ominous talk and I’m watching.

Brian Fies’s Mom’s Cancer story for the 26th is part of the strip’s rerun on GoComics. (Many comic strips that have ended their run go into eternal loops on GoComics.) This is one of the strips with mathematical content. The spatial dimension of a thing implies relationships between the volume (area, hypervolume, whatever) of a thing and its characteristic linear measure, its diameter or radius or side length. It can be disappointing.
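The disappointing relationship, as I read it, is that volume scales as the cube of linear size; a sketch, using a sphere for simplicity:

```python
from math import pi

def sphere_volume(radius):
    """Volume of a sphere: (4/3) * pi * r^3."""
    return (4 / 3) * pi * radius ** 3

# Halving the diameter cuts the volume to one-eighth:
ratio = sphere_volume(1.0) / sphere_volume(2.0)
print(ratio)  # 0.125

# But halving the *volume* shrinks the diameter only about 21 percent:
shrink = 0.5 ** (1 / 3)
print(shrink)  # about 0.794
```

So a thing can lose half its volume while looking barely smaller, which is where the disappointment comes in.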

Nicholas Gurewitch’s Perry Bible Fellowship for the 26th is a repeat of one I get from my mathematics Twitter friends now and then. I should warn, it’s kind of racy content, at least as far as my usual recommendations here go. It’s also a little baffling because while the reveal of the unclad woman is funny … what, exactly, does it mean? The symbols don’t mean anything; they’re just what fits graphically. I think the strip is getting at Dr Loring not being able to see even a woman presenting herself for sex as anything but mathematics. I guess that’s funny, but it seems like the idea isn’t quite fully developed.

Zach Weinersmith’s Saturday Morning Breakfast Cereal Again for the 26th has a mathematician snort about plotting a giraffe logarithmically. This is all about representations of figures. When we plot something we usually start with a linear graph: a couple of axes perpendicular to one another. A unit of movement in the direction of any of those axes represents a constant difference in whatever that axis measures. Something growing ten units larger, say. That’s fine for many purposes. But we may want to measure something that changes by a power law, or that grows (or shrinks) exponentially. Or something that has some region where it’s small and some region where it’s huge. Then we might switch to a logarithmic plot. Here the same difference in space along the axis represents a change that’s constant in proportion: something growing ten times as large, say. The effective result is to squash a shape down, making the higher points more nearly flat.

And to completely smother Weinersmith’s fine enough joke: I would call that plot semilogarithmically. I’d use a linear scale for the horizontal axis, the gazelle or giraffe head-to-tail. But I’d use a logarithmic scale for the vertical axis, ears-to-hooves. So, linear in one direction, logarithmic in the other. I’d be more inclined to use “logarithmic” plots to mean logarithms in both the horizontal and the vertical axes. Those are useful plots for turning up power laws, like the relationship between a planet’s orbital radius and the length of its year. Relationships like that turn into straight lines when both axes are logarithmically spaced. But I might also describe that as a “log-log plot” in the hopes of avoiding confusion.
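A sketch of why log-log plots straighten out power laws: if y = c·x^k, then log y = log c + k·log x, a straight line of slope k. Using rough orbital figures for Earth and Jupiter (the period should go as the radius to the 3/2 power):

```python
# If period T is proportional to radius r to the 3/2 power, the slope
# between any two planets on a log-log plot should come out near 1.5.
from math import log

# (orbital radius in AU, year length in Earth years) -- rough values
earth = (1.0, 1.0)
jupiter = (5.2, 11.86)

slope = (log(jupiter[1]) - log(earth[1])) / (log(jupiter[0]) - log(earth[0]))
print(slope)  # close to 3/2
```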

And now to wrap up last week’s mathematically-themed comic strips. It’s not a set that let me get into any really deep topics however hard I tried overthinking it. Maybe something will turn up for Sunday.

Mason Mastroianni, Mick Mastroianni, and Perri Hart’s B.C. for the 7th tries setting arithmetic versus celebrity trivia. It’s for the old joke about what everyone should know versus what everyone does know. One might question whether Kardashian pet eating habits are actually things everyone knows. But the joke needs some hyperbole in it to have any vitality and that’s the only available spot for it. It’s easy also to rate stuff like arithmetic as trivia since, you know, calculators. But it is worth knowing that seven squared is pretty close to 50. It comes up when you do a lot of estimates of calculations in your head. The square root of 10 is pretty near 3. The square root of 50 is near 7. The cube root of 10 is a little more than 2. The cube root of 50 is a little more than three and a half. The cube root of 100 is a little more than four and a half. When you see ways to rewrite a calculation in estimates like this, suddenly, a lot of amazing tricks become possible.
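For anyone who wants to check those mental-arithmetic estimates against exact values:

```python
# The estimates from the paragraph above, next to what a calculator says.
from math import sqrt

print(7 ** 2)          # 49, pretty close to 50
print(sqrt(10))        # about 3.16: pretty near 3
print(sqrt(50))        # about 7.07: near 7
print(10 ** (1 / 3))   # about 2.15: a little more than 2
print(50 ** (1 / 3))   # about 3.68: a little more than three and a half
print(100 ** (1 / 3))  # about 4.64: a little more than four and a half
```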

Leigh Rubin’s Rubes for the 7th is a “mathematics in the real world” joke. It could be done with any mythological animals, although I suppose unicorns have the advantage of being relatively easy to draw recognizably. Mermaids would do well too. Dragons would also read well, but they’re more complicated to draw.

Mark Pett’s Mr Lowe rerun for the 8th has the kid resisting the mathematics book. Quentin’s grounds are that he can’t know whether a dated book is still relevant. There’s truth to Quentin’s excuse. A mathematical truth may be universal. Whether we find it interesting is a matter of culture and even fashion. There are many ways to present any fact, and the question of why we want to know this fact has as many potential answers as it has people pondering the question.

Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 8th is a paean to one of the joys of numbers. There is something wonderful in counting, in measuring, in tracking. I suspect it’s nearly universal. We see it reflected in people passing around, say, the number of rivets used in the Chrysler Building or how long a person’s nervous system would reach if stretched out into a line or ever-more-fanciful measures of stuff. Is it properly mathematics? It’s delightful, isn’t that enough?

Scott Hilburn’s The Argyle Sweater for the 10th is a Fibonacci Sequence joke. That’s a good one for taping to the walls of a mathematics teacher’s office.

Bill Rechin’s Crock rerun for the 11th is a name-drop of mathematics. Really anybody’s homework would be sufficiently boring for the joke. But I suppose mathematics adds the connotation that whatever you’re working on hasn’t got a human story behind it, the way English or History might, and that it hasn’t got the potential to eat, explode, or knock a steel ball into you the way Biology, Chemistry, or Physics have. Fair enough.

A couple of weeks after that — on Thanksgiving, it happens — we caught one more fish. This brought the total to 54. And I either failed to make note of it or I can’t find the note I made of it. Such happens.

In getting the pond ready for the spring, and the return of our goldfish to the outdoors, we found another one! It was just this orange thing dug into the muck of the pool, and we thought initially it was something that had fallen in and gotten lost. A heron scarer, was my love’s first guess. The pond thermometer that sank without trace some years back was mine. I used the grabber to poke at it and woke up a pretty sulky goldfish. It went over to some algae where we couldn’t so easily bother it.

So that brings our fish count to 55, for those keeping track. Fortunately, it was a very gentle winter in our parts. We’re hoping to bring the goldfish back out to the pond in the next week or two. Our best estimate for the carrying capacity of the pond is 65 to 130 goldfish, so, we will see whether the goldfish do anything about this slight underpopulation.

Folks who’ve been around a while may remember the matter of our fish. I’d spent some time in the spring describing ways to estimate a population using techniques other than just counting everybody. And then revealed that the population of goldfish in our pond was something like 53, based on counting the fifty which we’d had wintering over in our basement and the three we counted in the pond despite the winter ice. This is known as determining the population “by inspection”.

I’m disappointed to say that, as best we can work out, they didn’t get around to producing any new goldfish this year. We didn’t see any evidence of babies, and haven’t seen any noticeably small ones swimming around. It’s possible we set them out too late in the spring. It’s possible too that the summer was never quite warm enough for them to feel like it was fish-production time.

This does mean that we have a reasonably firm upper limit on the number of fish we need to take in. 53 appears to be it. The winter’s been settling in, though, and we’ve started taking them in. This past day we took in twelve. That’s not bad for the first harvest and if we’re lucky we should have the pond emptied in a week or so. I’ll let folks know if there turns out to be a surprise in goldfish cardinality.

This is one of my A to Z words that everyone knows. An error is some mistake, evidence of our human failings, to be minimized at all costs. That’s … well, it’s an attitude that doesn’t let you use error as a tool.

An error is the difference between what we would like to know and what we do know. Usually, what we would like to know is something hard to work out. Sometimes it requires complicated work. Sometimes it requires an infinite amount of work to get exactly right. Who has the time for that?

This is how we use errors. We look for methods that approximate the thing we want, and that estimate how much of an error that method makes. Usually, the method involves doing some basic step some large number of times. And usually, if we did the step more times, the estimate of the error we make will be smaller. My essay “Calculating Pi Less Terribly” shows an example of this. If we add together more terms from that Leibniz formula we get a running total that’s closer to the actual value of π.
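A sketch of that convergence, if you’d like to watch the error shrink:

```python
# Partial sums of the Leibniz formula pi = 4 * (1 - 1/3 + 1/5 - 1/7 + ...).
# Doing the basic step more times makes the error smaller.
from math import pi

def leibniz_pi(terms):
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)
    return 4 * total

for n in (10, 100, 1000):
    estimate = leibniz_pi(n)
    print(n, estimate, abs(pi - estimate))  # the error column keeps dropping
```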

Back on “Pi Day” I shared a terrible way of calculating the digits of π. It’s neat in principle, yes. Drop a needle randomly on a uniformly lined surface. Keep track of how often the needle crosses over a line. From this you can work out the numerical value of π. But it’s a terrible method. To be sure that π is about 3.14, rather than 3.12 or 3.38, you can expect to need to do over three and a third million needle-drops. So I described this as a terrible way to calculate π.
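This needle-drop scheme is the classic Buffon’s needle experiment. A simulation sketch, with the needle length set equal to the line spacing so the crossing probability is 2/π:

```python
# Buffon's needle by simulation: drop a needle on lined paper, count
# crossings, and turn the crossing rate back into an estimate of pi.
import random
from math import sin, pi

def buffon_estimate(drops, rng):
    crossings = 0
    for _ in range(drops):
        # distance from needle's center to the nearest line (spacing = 1)
        y = rng.uniform(0, 0.5)
        # acute angle between needle and the lines
        theta = rng.uniform(0, pi / 2)
        if 0.5 * sin(theta) >= y:
            crossings += 1
    return 2 * drops / crossings

rng = random.Random(42)
estimate = buffon_estimate(100_000, rng)
print(estimate)  # somewhere near 3.14, but converging painfully slowly
```

Even with a hundred thousand drops the estimate wobbles in the second decimal place, which is the terribleness being described.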

A friend on Twitter asked if it was worse than adding up 4 * (1 – 1/3 + 1/5 – 1/7 + … ). It’s a good question. The answer is yes, it’s far worse than that. But I want to talk about working π out that way.

When I worked out how interesting, in an information-theory sense, a basketball game — and from that, a tournament — might be, I supposed there was only one thing that might be interesting about the game: who won? Or to be exact, “did (this team) win”? But that isn’t everything we might want to know about a game. For example, we might want to know what a team scored. People often do. So how to measure this?

The answer was given, in embryo, in my first piece about how interesting a game might be. If you can list all the possible outcomes of something that has multiple outcomes, and how probable each of those outcomes is, then you can describe how much information there is in knowing the result. It’s the sum, for all of the possible results, of the quantity negative one times the probability of the result times the logarithm-base-two of the probability of the result. When we were interested in only whether a team won or lost there were just the two outcomes possible, which made for some fairly simple calculations, and indicates that the information content of a game can be as high as 1 — if the team is equally likely to win or to lose — or as low as 0 — if the team is sure to win, or sure to lose. And the units of this measure are bits, the same kind of thing we use to measure (in groups of bits called bytes) how big a computer file is.
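That formula, sketched as code for the win-or-lose case:

```python
# Information content: the sum over outcomes of -p * log2(p), in bits.
from math import log2

def game_information(p_win):
    """Bits of information in a game a team wins with probability p_win."""
    bits = 0.0
    for p in (p_win, 1 - p_win):
        if p > 0:  # a zero-probability outcome contributes nothing
            bits -= p * log2(p)
    return bits

print(game_information(0.5))  # 1.0: an even match carries a full bit
print(game_information(1.0))  # 0.0: a foregone conclusion carries nothing
```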

When I wrote about how interesting the results of a basketball tournament were, and came to the conclusion that it was 63 (and filled in that I meant 63 bits of information), I was careful to say that the outcome of a basketball game between two evenly-matched opponents has an information content of 1 bit. If the game is a foregone conclusion, then the game hasn’t got so much information about it. If the game really is foregone, the information content is 0 bits; you already know what the result will be. If the game is an almost sure thing, there’s very little information to be had from actually seeing the game. An upset might be thrilling to watch, but you would hardly count on that, if you’re being rational. But most games aren’t sure things; we might expect the higher-seed to win, but it’s plausible they don’t. How does that affect how much information there is in the results of a tournament?

Last year, the NCAA College Men’s Basketball tournament inspired me to look up what the outcomes of various types of matches were, and which teams were more likely to win than others. If some person who wrote something for statistics.about.com is correct, based on 27 years of March Madness outcomes, the play between a number one and a number 16 seed is a foregone conclusion — the number one seed always wins — while number two versus number 15 is nearly sure. So while the first round of play will involve 32 games — four regions, each region having eight games — there’ll be something less than 32 bits of information in all these games, since many of them are so predictable.

So if the eight contests in a single region were all evenly matched, the information content of that region would be 8 bits. But there’s one sure and one nearly-sure game in there, and there’s only a couple games where the two teams are close to evenly matched. As a result, I make out the information content of a single region to be about 5.392 bits of information. Since there’s four regions, that means the first round of play — the first 32 games — have altogether about 21.567 bits of information.

Warning: I used three digits past the decimal point just because three is a nice comfortable number. Do not be hypnotized into thinking this is a more precise measure than it really is. I don’t know what the precise chance of, say, a number three seed beating a number fourteen seed is; all I know is that in a 27-year sample the higher seed won 85 percent of the time, so the chance of the higher seed winning is probably close to 85 percent. And I only know that if whoever wrote this article actually gathered and processed and reported the information correctly. I would not be at all surprised if the first round turned out to have only 21.565 bits of information, or as many as 21.568.

A statistical analysis of the tournaments which I dug up last year indicated that in the last three rounds — the Elite Eight, Final Four, and championship game — the higher- and lower-seeded teams are equally likely to win, and therefore those games have an information content of 1 bit per game. The last three rounds therefore have 7 bits of information total.

Unfortunately, experimental data seems to fall short for the second round — 16 games, where the 32 winners in the first round play, producing the Sweet Sixteen teams — and the third round — 8 games, producing the Elite Eight. If someone’s done a study of how often the higher-seeded team wins I haven’t run across it.

There are six of these games in each of the four regions, for 24 games total. Presumably the higher seed is more likely than the lower seed to win, but I don’t know how much more probable the higher seed’s winning is. I can come up with some bounds: the 24 games in the second and third rounds must have an information content greater than 0 bits, since they’re not all foregone conclusions. The higher-ranked seed won’t win all the time. And they can’t have an information content of more than 24 bits, since that’s how much there would be if the games were perfectly even matches.

So, then: the first round carries about 21.567 bits of information. The second and third rounds carry between 0 and 24 bits. The fourth through sixth rounds (the sixth round is the championship game) carry seven bits. Overall, the 63 games of the tournament carry between 28.567 and 52.567 bits of information. I would expect that many of the second-round and most of the third-round games are pretty close to even matches, so I would expect the higher end of that range to be closer to the true information content.

Let me make the assumption that in this second and third round the higher-seed has roughly a chance of 75 percent of beating the lower seed. That’s a number taken pretty arbitrarily as one that sounds like a plausible but not excessive advantage the higher-seeded teams might have. (It happens it’s close to the average you get of the higher-seed beating the lower-seed in the first round of play, something that I took as confirming my intuition about a plausible advantage the higher seed has.) If, in the second and third rounds, the higher-seed wins 75 percent of the time and the lower-seed 25 percent, then the outcome of each game is about 0.8113 bits of information. Since there are 24 games total in the second and third rounds, that suggests the second and third rounds carry about 19.471 bits of information.
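For anyone who’d like to check that 0.8113 figure:

```python
# Information in one game where the higher seed wins 75 percent of the
# time, and the total over the 24 second- and third-round games.
from math import log2

p = 0.75
bits_per_game = -(p * log2(p) + (1 - p) * log2(1 - p))
print(bits_per_game)       # about 0.8113 bits
print(24 * bits_per_game)  # about 19.47 bits for rounds two and three
```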

Warning: Again, I went to three digits past the decimal just because three digits looks nice. Given that I do not actually know the chance a higher seed beats a lower seed in these rounds, and that I just made up a number that seems plausible, you should not be surprised if the actual information content turns out to be 19.468 or even 19.472 bits of information.

Taking all these numbers, though — the first round with its something like 21.567 bits of information; the second and third rounds with something like 19.471 bits; the fourth through sixth rounds with 7 bits — the conclusion is that the win/loss results of the entire 63-game tournament are about 48 bits of information. It’s a bit higher the more unpredictable the games involving the final 32 and the Sweet 16 are; it’s a bit lower the more foregone those conclusions are. But 48 bits sounds like a plausible enough answer to me.

I’d discussed a probability/sampling-based method to estimate the number of fish that might be in our pond out back, and then some of the errors that have to be handled if you want to have a reliable result. Now, I want to get into why the method doesn’t work, at least not without much greater insight into goldfish behavior than simply catching a couple and releasing them will do.

Catching a sample, re-releasing it, and counting how many of that sample we re-catch later on is a logically valid method, provided certain assumptions the method requires are accurately — or at least accurately enough — close to the way the actual thing works. Here are some of the ways goldfish fall short of the ideal.
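This catch-and-re-catch scheme is known in the ecology literature as the Lincoln-Petersen estimator. A sketch, with all the numbers made up for illustration:

```python
# Lincoln-Petersen estimate: mark n1 animals, later catch n2, and find
# m of them marked; the population is about n1 * n2 / m.
def estimate_population(first_catch, second_catch, recaptured):
    if recaptured == 0:
        raise ValueError("no recaptures: the estimate is unbounded")
    return first_catch * second_catch / recaptured

# Say we netted and marked 10 fish, later netted 12, and 2 were marked:
print(estimate_population(10, 12, 2))  # estimates 60 fish
```

The logic: if 2 of the 12 re-caught fish are marked, then marked fish are presumably about one-sixth of the pond, so the 10 marked fish imply about 60 in all. The faulty assumptions below are about when that “presumably” breaks down.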

First faulty assumption: Goldfish are perfectly identical. In this goldfish-trapping scheme we make the assumption that there is some fixed, constant probability of a goldfish being caught in the net. We have to assume that this is the same number for every goldfish, and that it doesn’t change as goldfish go through the experience of getting caught and then released. But goldfish have personality, as you learn if you have a bunch in a nice setting and do things like try feeding them koi treats or introduce something new like a wire-mesh trap to their environment. Some are adventurous and will explore the unfamiliar thing; some are shy and will let everyone else go first and then maybe not bother going at all. I empathize with both positions.

If there are enough goldfish, the variation between personalities is probably not going to matter much. There’ll be some that are easy to catch, and they’ll probably be roughly as common as the ones who can’t be coaxed into the trap at all. It won’t be exactly balanced unless we’re very lucky, but this would probably only throw off our calculations a little bit.

Whether the goldfish learn, and become more, or less, likely to be trapped in time is harder. Goldfish do learn, certainly, although it’s not obvious to me that the trapping and releasing experience would be one they draw much of a lesson from. It’s only a little inconvenience, really, and not at all harmful; what should they learn? Other than that there’s maybe an easy bit of food to be had here so why not go in? So this might change their behavior and it’s hard to predict how.

(I note that animal capture studies get quite frustrated when the animals start working out how to game the folks studying them. Bil Gilbert’s early-70s study of coatis — Latin American raccoons, written up in the lovely popularization Chulo: A Year Among The Coatimundis — was plagued by some coatis who figured out going into the trap was an easy, safe meal they’d be released from without harm, and wouldn’t go back about their business and leave room for other specimens.)

Second faulty assumption: Goldfish are not perfectly identical. This is the biggest challenge to counting goldfish population by re-catching a sample of them. How do you know if you caught a goldfish before? When they grow to adulthood, it’s not so bad, since they grow fairly distinctive patterns of orange and white and black and such, and they’ll usually settle into different sizes. (That said, we do have two adult fish who were very distinct when we first got them, but who’ve grown into near-twins.)

But baby goldfish? They’re basically all tiny black things, meant to hide in the mud at the bottom of ponds and rivers — their preferred habitat — and pretty near indistinguishable. As they get larger they get distinguishable, a bit, and start to grow patterns, but for the vast number of baby fish there’s just no telling one from another.

When we were trying to work out whether some mice we found in the house were ones we had previously caught and put out in the garage, we were able to mark them by squirting some food dye at their heads as they were released. The mice would rub the food dye from their heads onto their whole bodies and it would take a while before the dye would completely fade out. (We didn’t re-catch any mice, although it’s hard to dye a wild mouse efficiently because they will take off like bullets. Also one time when we thought we’d captured one there were actually three in the humane trap, and you try squirting the food dye bottle at two more mice than you thought were there, fleeing.) But you can see how the food dye wouldn’t work here. Animal researchers with a budget might go on to attach collars or somehow otherwise mark animals, but if there’s a way to mark and track goldfish with ordinary household items I can’t think of it.

(No, we will not be taking the bits of americium in our smoke detectors out and injecting them into trapped goldfish; among the objections, I don’t have a radioactivity detector.)

Third faulty assumption: Goldfish are independent entities. The first two faulty assumptions are ones that could be kind of worked around. If there’s enough goldfish then the distribution of how likely any one is to get caught will probably be near enough normal that we can pretend there’s an identical chance of catching each, and if we really thought about it we could probably find some way of marking goldfish to tell if we re-caught any. Independence, though; this is the point on which so many probability-based schemes fall.

Independence, in the language of probability, is the principle that one thing’s happening does not affect the likelihood of another thing happening. For our problem, it’s the assumption that one goldfish being caught does not make it any more or less likely that another goldfish will be caught. We like independence, in studying probability. It makes so many problems easier to study, or even possible to study, and it often seems like a reasonable supposition.

A good number of interesting scientific discoveries amount to finding evidence that two things are not actually independent, and that one thing happening makes it more (or less) likely the other will. Sometimes these turn out to be vapor — there was a 19th-century notion suggesting a link between sunspot activity and economic depressions (because sunspots correlate to solar activity, which could affect agriculture, and up to 1893 the economy and agriculture were pretty much the same thing) — but when there is a link the results can be profound, as see the smoking-and-cancer link, or for something promising but still (to my understanding) under debate, the link between leaded gasoline and crime rates.

How this applies to the goldfish population problem, though, is that goldfish are social creatures. They school, loosely, forming and re-forming groups, and would much rather be around another goldfish than not. Even as babies they form these adorable tiny little schools; that may be in the hopes that someone else will get eaten by a bigger fish, but they keep hanging around other fish their own size through their whole lives. If there’s a goldfish inside the trap, it is hard to believe that other goldfish are not going to follow it just to be with the company.

Indeed, the first day we set out the trap for the winter, we pulled in all but one of the adult fish, all of whom apparently followed the others into the enclosure. I’m sorry I couldn’t photograph that because it was both adorable and funny to see so many fish just station-keeping beside one another — they were even all looking in the same direction — and waiting for whatever might happen next. Throughout the months we were able to spend bringing in fish, the best bait we could find was to have one fish already in the trap, and a couple days we did leave one fish in a few more hours or another night so that it would be joined by several companions the next time we checked.

So that’s something which foils the catch and re-catch scheme: goldfish are not independent entities. They’re happy to follow one another into a trap. I would think the catch and re-catch scheme should be salvageable, if it were adapted to the way goldfish actually behave. But that requires a mathematician admitting that he can’t just blunder into a field with an obvious, simple scheme to solve a problem, and instead requires the specialized knowledge and experience of people who are experts in the field, and that of course can’t be done. (For example, I don’t actually know that goldfish behavior is sufficiently non-independent as to make an important difference in a population estimate of this kind. But someone who knew goldfish or carp well could tell me, or tell me how to find out.)

For those curious how the goldfish worked out, though, we were able to spend about two and a half months catching fish before the pond froze over for the winter, though the number we caught each week dropped off as the temperature dropped. We have them floating about in a stock tank in the basement, waiting for the coming of spring and the time the pond will be warm enough for them to re-occupy it. We also know that at least some of the goldfish we didn’t catch made it to, well, about a month ago: through a hole in the ice I spotted one of the five orange baby fish who had refused to go into the trap. It was holding close to the bottom but seemed to be in good shape.

This coming year should be an exciting one for our fish population.

Last week I chatted a bit about a probabilistic, sampling-based method to estimate the population of fish in our backyard pond. The method estimates the population of a thing, in this case the fish, by capturing a sample of some size C and dividing that by the probability of catching one of the things in your sampling. Since we might not know the chance of catching the thing beforehand, we estimate it: catch some number t of the fish or whatever, then put them back, and then try catching that many again. Some number r of those will be re-caught, so we can estimate the chance of catching one fish as r/t. So the original population will be somewhere about C·t/r.

I want to talk a little bit about why that won’t work.

There is of course the obvious reason to think this will go wrong; it amounts to exactly the same reason why a baseball player with a .250 batting average — meaning the player can expect to get a hit in one out of every four at-bats — might go an entire game without getting on base, or might get on base three times in four at-bats. If something has n chances to happen, and it has a probability p of happening at every chance, it’s most likely that it will happen n·p times, but it can happen more or fewer times than that. Indeed, we’d get a little suspicious if it happened exactly n·p times every single time. If we flipped a fair coin twenty times, it’s most likely to come up tails ten times, but there’s nothing odd about it coming up tails as few as eight or as many as fourteen times, and it’d stand out if it always came up tails exactly ten times.
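That coin intuition can be checked directly with the binomial probability formula. Here is a small Python sketch of my own (not part of the essay's later Octave session), computing the chance of getting exactly so many tails in twenty flips of a fair coin:

```python
from math import comb

def exact_tails(n, k):
    """Probability of exactly k tails in n flips of a fair coin: C(n,k) / 2^n."""
    return comb(n, k) / 2**n

# Ten tails is the single most likely outcome, yet it happens barely
# one time in six; eight or fourteen tails are entirely ordinary.
print(round(exact_tails(20, 10), 3))  # → 0.176
print(round(exact_tails(20, 8), 3))   # → 0.12
print(round(exact_tails(20, 14), 3))  # → 0.037
```

So even the "most likely" result fails to happen more than five times out of six, which is the whole trouble with reading too much into a single sample.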

To apply this to the fish problem: suppose that there are 50 fish in the pond; 50 is the number we want our estimate to recover. And suppose we know for a fact that every fish has a 12.5 percent chance — a probability of 0.125 — of being caught in our trap. Ignore for right now how we know that probability; just pretend we can count on that being exactly true. The expectation value, the most probable number of fish to catch in any attempt, is 50 × 0.125 = 6.25 fish, which presents our first obvious problem: we can’t catch a quarter of a fish. Well, maybe a fish might be wriggling around the edge of the net and fall out as we pull the trap out. (This actually happened as I was pulling some of the baby fish in for the winter.)

With these numbers it’s most probable to catch six fish, slightly less probable to catch seven fish, less probable yet to catch five, then eight and so on. But these are all tolerably plausible numbers. I used a mathematics package (Octave, an open-source clone of Matlab) to run ten simulated catches, from fifty fish each with a probability of .125 of being caught, and came out with these sizes for the fish harvests:

M =

   4
   6
   3
   6
   7
   7
   5
   7
   8
   9

Since we know, by some method, that the chance of catching any one fish is exactly 0.125, this implies fish populations of:

 M    N
 4   32
 6   48
 3   24
 6   48
 7   56
 7   56
 5   40
 7   56
 8   64
 9   72

Now, none of these is the right number, although 48 is respectably close and 56 isn’t too bad. But the range is hilarious: there might be as few as 24 or as many as 72 fish, based on just this evidence. That might as well be guessing.

This is essentially a matter of error analysis. Any one attempt at catching fish may be faulty, because the fish are too shy of the trap, or too eager to leap into it, or are just being difficult for some reason. But we can correct for the flaws of one attempt at fish-counting by repeating the experiment. We can’t always be unlucky in the same ways.

This is conceptually easy, and extremely easy to do on the computer; it’s a little harder in real life but certainly within the bounds of our research budget, since I just have to go out back and put the trap out. And redoing the experiment pays off, too: average those population samples from the ten simulated runs there and we get a mean estimated fish population of 49.6, which is basically dead on.

(That was lucky, I must admit. Ten attempts isn’t really enough to make the variation comfortably small. Another run with ten simulated catchings produced a mean estimate population of 56; the next one … well, 49.6 again, but the one after that gave me 64. It isn’t until we get into a couple dozen attempts that the mean population estimate gets reliably close to fifty. Still, the work is essentially the same as the problem of “I flipped a fair coin some number of times; it came up tails ten times. How many times did I flip it?” It might have been any number ten or above, but I most probably flipped it about twenty times, and twenty would be your best guess absent more information.)
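The repeat-and-average procedure is quick to sketch in code. Here is a Python version of my own (the essay's simulations used Octave), reusing the ten catch sizes from the run above as data and then running a fresh simulation:

```python
import random

def estimate_population(catches, p):
    """Average the per-catch estimates: each catch of m fish suggests m / p fish."""
    return sum(m / p for m in catches) / len(catches)

# The ten simulated catch sizes from the run shown above:
catches = [4, 6, 3, 6, 7, 7, 5, 7, 8, 9]
print(estimate_population(catches, 0.125))  # → 49.6

# A fresh simulation: each of 50 fish has a 0.125 chance of being caught per attempt.
def simulate_catch(n_fish=50, p=0.125):
    return sum(1 for _ in range(n_fish) if random.random() < p)

fresh = [simulate_catch() for _ in range(10)]
print(estimate_population(fresh, 0.125))  # varies run to run; usually somewhere near 50
```

The second estimate wobbles from run to run, exactly the variation described in the surrounding text; only doing many more attempts pins it down.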

The same problem affects working out what the probability of catching a fish is, since we do that by catching some small number of fish and then seeing how many of them we re-catch later on. Suppose the probability of catching a fish really is 0.125, but we’re catching only six fish at a time. Here are four rounds of ten simulated catchings of six fish, and how many of each catch were re-caught:

2 0 1 0 1 0 1 0 0 1

2 0 1 1 0 3 0 0 1 1

0 1 0 1 0 0 1 0 0 0

1 0 0 0 0 0 0 0 2 1

Obviously any one of those indicates a probability ranging from 0 to 0.5 of re-catching a fish. Technically, yes, 0.125 is a number between 0 and 0.5, but it hasn’t really shown itself. But if we average out all these probabilities … well, those forty attempts give us a mean estimated probability of 0.092. This isn’t excellent but at least it’s in range. If we keep doing the experiment we’d do better; one simulated batch of a hundred experiments turned up a mean estimated probability of 0.12833. (And there are variations, of course; another batch of 100 attempts estimated the probability at 0.13333, and then the next at 0.10667, though if you use all three hundred of these that gets to an average of 0.12278, which isn’t too bad.)
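The averaging over those forty attempts can be spelled out in a few lines of Python (a sketch of my own; the counts are the ones listed above, and each attempt caught six fish):

```python
# Re-catch counts from the forty simulated attempts, six fish caught each time.
recaptures = [2, 0, 1, 0, 1, 0, 1, 0, 0, 1,
              2, 0, 1, 1, 0, 3, 0, 0, 1, 1,
              0, 1, 0, 1, 0, 0, 1, 0, 0, 0,
              1, 0, 0, 0, 0, 0, 0, 0, 2, 1]

# Each attempt's estimate of the catch probability is (number re-caught) / 6.
p_mean = sum(r / 6 for r in recaptures) / len(recaptures)
print(round(p_mean, 3))  # → 0.092
```

Forty attempts re-caught 22 fish out of 240 caught in all, which is where the mean of 0.092 comes from; it takes a few hundred attempts before the average settles reliably near the true 0.125.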

This inconvenience amounts to a problem of working with small numbers in the original fish population, in the number of fish sampled in any one catching, and in the number of catches done to estimate their population. Small numbers tend to be problems for probability and statistics; the tools grow much more powerful and much more precise when they can work with enormously large collections of things. If the backyard pond held infinitely many fish we could have a much better idea of how many fish were in it.

We have a pond out back, and in 2013, added some goldfish to it. The goldfish, finding themselves in a comfortable spot with clean water, went about the business of making more goldfish. They didn’t have much time to do that before winter of 2013, but they had a very good summer in 2014, producing so many baby goldfish that we got a bit tired of discovering new babies. The pond isn’t quite deep enough that we could be sure it was safe for them to winter over, so we had to work out moving them to a tub indoors. This required, among other things, having an idea how many goldfish there were. The question then was: how many goldfish were in the pond?

It’s not hard to come up with a maximum estimate: a goldfish needs some amount of water to be healthy. Wikipedia seems to suggest a single fish needs about twenty gallons — call it 80 liters — and I’ll accept that since it sounds plausible enough and it doesn’t change the logic of the maximum estimate if the number is actually something different. The pond’s about ten feet across, and roughly circular, and not quite two feet deep. Call that a circular cylinder, with a diameter of three meters, and a depth of two-thirds of a meter, and that implies a volume of about pi times (3/2) squared times (2/3) cubic meters. That’s about 4.7 cubic meters, or 4700 liters. So there probably would be at most 60 goldfish in the pond. Could the goldfish have reached the pond’s maximum carrying capacity that quickly? Easily; you would not believe how fast goldfish will make more goldfish given fresh water and a little warm weather.
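The arithmetic of that maximum estimate, spelled out in a quick Python sketch (the 80-liters-per-fish figure is the assumption mentioned above):

```python
import math

diameter_m = 3.0       # pond is roughly ten feet across
depth_m = 2 / 3        # not quite two feet deep
liters_per_fish = 80   # assumed water requirement per healthy goldfish

# Volume of a circular cylinder: pi * radius^2 * depth.
volume_m3 = math.pi * (diameter_m / 2) ** 2 * depth_m
liters = 1000 * volume_m3

print(round(volume_m3, 1))              # → 4.7 (cubic meters)
print(round(liters / liters_per_fish))  # → 59, call it 60 fish at most
```

The answer is only as good as the cylinder approximation and the liters-per-fish figure, but for a maximum estimate that is all we need.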

It can be a little harder to quite believe in the maximum estimate. For one, smaller fish don’t need as much water as bigger ones do and the baby fish are, after all, small. Or, since we don’t really know how deep the pond is — it’s not a very regular bottom, and it’s covered with water — might there be even more water and thus capacity for even more fish? That might sound ridiculous but consider: an error of two inches in my estimate of the pond’s depth amounts to a difference of 350 liters or room for four or five fish.

We can turn to probability, though. If we have some way of catching fish — and we have; we’ve got a wire trap and a mesh trap, which we’d use for bringing in fish — we could set them out and see how many fish we can catch. If we suppose there’s a certain probability p of catching any one fish, and if there are N fish in the pond, any of which might be caught, then we could expect that some number p·N of fish are going to be caught. So if, say, we have a one-in-three chance of catching a fish, and after trying we’ve got some number of fish — let’s say there were 8 caught, so we have some specific number to play with — we could conclude that there must have been about 8 ÷ (1/3), or 24, fish in the population to catch.

This does bring up the problem of how to guess what the probability of catching any one fish is. But if we make some reasonable-sounding assumptions we can get an estimate of that: set out the traps and catch some number, call it t, of fish. Then set them back and, after they’ve had time to recover from the experience, put the traps out again to catch t fish again. We can expect that of that bunch there will be some number, call it r, of the fish we’d previously caught. The ratio r/t of the fish we catch twice to the number of fish we caught in the first place should be close to the chance of catching any one fish.

So let’s lay all this out. If there are some unknown number N of fish in the pond, and there is a chance p = r/t of any one fish being caught, and in seriously trying we’ve caught C fish, then: C = p·N and therefore N = C/p = C·t/r.

For example, suppose in practice we caught ten fish, and were able to re-catch four of them. Then in trying seriously we caught twelve fish. From this we’d conclude that p = 4/10 = 0.4 and therefore there are about 12 ÷ 0.4 = 30 fish in the pond.

Or say in practice we’d caught twelve fish, five of them a second time, and then in trying seriously we caught eleven fish. Then since p = 5/12 we get an estimate of 11 × 12 ÷ 5 = 26.4, or call it 26 fish in the pond.

Or for another variation: suppose the first time out we caught nine fish, and the second time around, catching another nine, we re-caught three of them. If we’re feeling a little lazy we can skip going around and catching fish again, and just use the figures that p = 3/9 = 1/3 and C = 9, and from that conclude there are about 27 fish in the pond.

So, in principle, if we’ve made assumptions about the fish population that are right, or at least close enough to right, we can estimate what the fish population is without having to go to the work of catching every single one of them.

Since this is a generally useful scheme for estimating a population let me lay it out in an easy-to-follow formula.

To estimate the size of a population of things, assuming that they are all equally likely to be detected by some system (being caught in a trap, being photographed by someone at a spot, anything), try this:

Catch some particular number, t, of the things. Then let them go back about their business.

Catch another t of them. Count the number, r, of them that you caught before.

The chance of catching one is therefore about r/t.

Catch some number C of the things.

Since — we assume — every one of the things had the same chance of being caught, and since we caught C of them, then we estimate there to be C·t/r of the things to catch.

Warning! There is a world of trouble hidden in that “we assume” on the last step there. Do not use this for professional wildlife-population-estimation until you have fully understood those two words.
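Put together as code, the whole recipe comes down to one line of arithmetic. The function and variable names below are my own labels for the quantities in the steps above, not the essay's:

```python
def recapture_estimate(first_catch, recaught, big_catch):
    """Capture-recapture estimate: the chance of catching a thing is about
    recaught / first_catch, so the population is about big_catch divided
    by that chance, i.e. big_catch * first_catch / recaught."""
    return big_catch * first_catch / recaught

# The three worked examples from the text:
print(recapture_estimate(10, 4, 12))  # → 30.0
print(recapture_estimate(12, 5, 11))  # → 26.4, call it 26
print(recapture_estimate(9, 3, 9))    # → 27.0
```

Note that the estimate blows up if no marked things are re-caught at all, which is one of the small-sample troubles the warning above is gesturing at.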

Friedrich uses logarithms to work it out, and this is one of the things logarithms are good for in these days when you don’t generally need them to do multiplications and divisions. You can look at logarithms as letting you evaluate the lengths of numbers — how many digits they need to write out — rather than the numbers themselves, and this brings into accessible range numbers that would otherwise be too big to work with, even on a calculator. (Another thing logarithms are good for is that they’re quite nice to work with if you have to do calculus, so once you’re comfortable with them, you start looking for chances to slip them into analysis.)

One nagging little point about Friedrich’s work, though, is that you need to know the logarithm of 3 to work it out. (Also you need the logarithm of 10, or you could try using the common logarithm — the logarithm base ten — of 3 instead.) For finding the actual number that’s fine; trying to get this answer with any precision without looking up the logarithm of 3 is quirky if not crazy.

But what if you want to do this purely by the joys of mental arithmetic? Could you work the number out without finding a table of logarithms? Obviously you can’t if you want a really precise answer, and the full string of digits counts as precise, but could you at least get a good idea of how big a number it is?