There were just enough mathematically-themed comic strips last week to make two editions for this coming week. If all goes well I’ll run the other half on either Wednesday or Thursday. One thing isn’t quite well, though: one of the comics is in dubious taste. I’ll put that at the end, behind a more specific content warning. In the meanwhile, you can find this and hundreds of other Reading the Comics posts at this link.
Thaves’s Frank and Ernest for the 11th is wordplay, built on the conflation of “negative” as in numbers and “negative” as in bad. I’m not sure the two meanings are unrelated. The word ‘negative’ itself derives from the Latin word meaning to deny, which sounds bad. It’s easy to see why the term would attach to what we call negative numbers. A number plus its negation leaves us zero, a nothing. But it does make the negative numbers sound like bad things to have around, or to have to deal with. The convention that a negative number is less than zero implies that the default choice for a number is one greater than zero. And the default choice is usually seen as the good one, with everything else a falling-away. Still, -7 is as legitimate a number as 7 is; it’s we who think one is better than another.
J C Duffy’s Lug Nuts for the 11th has the Dadaist panel present prime numbers as a way to communicate. I suspect Duffy’s drawing from speculations about how to contact alien intelligences. One problem with communicating with the truly alien is how to recognize there is a message being sent. A message too regular will look like a natural process, one conveying no more intelligence than the brightness that comes to most places at dawn and the darkness that comes at sunset. A message too information-packed, peculiarly, looks like random noise. We need an intermediate level: a signal that’s easy to receive, but hard to produce by natural processes.
Prime numbers seem like a good compromise. An intelligence that understands arithmetic will surely notice prime numbers, or at least work out quickly what’s novel about this set of numbers once given them. And it’s hard to imagine an intelligence capable of sending or receiving interplanetary signals that doesn’t understand arithmetic. (Admitting that yes, we might be ruling out conversational partners by doing this.) We can imagine a natural process that sends out (say) three pulses and then rests, or five pulses and rests. Or even draws out longer cycles: two pulses and a rest, three pulses and a rest, five pulses and a rest, and then a big rest before restarting the cycle. But the longer the string of prime numbers, the harder it is to figure a natural process that happens to hit them and not other numbers.
We think, anyway. Until we contact aliens we won’t really know what it’s likely alien contact would be like. Prime numbers seem good to us, but — even if we stick to numbers — there’s no reason triangular numbers, square numbers, or perfect numbers might not be as good. (Well, maybe not perfect numbers; there aren’t many of them, and they grow very large very fast.) But we have to look for something particular, and this seems like a plausible particularity.
Charles Schulz’s Peanuts Begins for the 11th is an early strip, from the days when Lucy would look to Charlie Brown for information. And it’s a joke built on conflating ‘zero’ with ‘nothing’. Lucy’s right that zero times zero has to be something. That’s how multiplication works. That the number zero is something? That’s a tricky concept. I think being mathematically adept can blind one to how weird that is. Once you’re used to the idea that zero is the amount of a thing you have when you have nothing of that thing, you start to see what’s weird about it.
But I’m not sure the strip quite sets that up well. I think if Charlie Brown had answered that zero times zero was “nothing” it would have been right (or right enough) and Lucy’s exasperation would have flowed more naturally. As it is? She must know that zero is “nothing”; so why would she figure “nothing times nothing” has to be something? Then again, that version would have left Charlie Brown less reason to feel exasperated, and the reader less reason to feel on Charlie Brown’s side. Young Lucy’s leap to “three” needs to be at least a bit illogical to make any sense.
Now to the last strip and the one I wanted to warn about. It alludes to gun violence and school shootings. If you don’t want to deal with that, you’re right. There’s other comic strips to read out there. And this for a comic that ran on the centennial of Armistice Day, which has to just be an oversight in scheduling the (non-plot-dependent) comic.
There was something in common in two of the last five comic strips worth attention from last week. That’s good enough to give the essay its name.
Greg Cravens’s The Buckets for the 8th showcases Toby discovering the point of letters in algebra. It’s easy to laugh at him being ignorant. But the use of letters this way is something it’s easy to miss. You need first to realize that we don’t need to have a single way to represent a number. Which is implicit in learning, say, that you can write ‘7’ as the Roman numeral ‘VII’ or so, but I’m not sure that’s always clear. And realizing that you could use any symbol to write out ‘7’ if you agree that’s what the symbol means? That’s an abstraction tossed onto people who often aren’t really up for that kind of abstraction. And that we can have a symbol for “a number whose identity we don’t yet know”? Or even “a number whose identity we don’t care about”? Don’t blame someone for rearing back in confusion at this.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 8th talks about vectors and scalars. And about the little ways that instructors in one subject can sabotage one another. In grad school I was witness to the mathematics department feeling quite put-upon by the engineering departments, who thought we were giving their students inadequate calculus training. Meanwhile we couldn’t figure out what they were telling students about calculus except that it was screwing up their understanding.
To a physicist, a vector is a size and a direction together. (At least until they get seriously into mathematical physics when they need a more abstract idea.) A scalar is a number. Like, a real-valued number such as ‘4’. Maybe a complex-valued number such as ‘4 + 6i’. Vectors are great because a lot of physics problems become easier when thought of in terms of directions and amounts in that direction.
A mathematician would start out with vectors and scalars like that. But then she’d move into a more abstract idea. A vector, to a mathematician, is a thing you can add to another vector and get a vector out. A scalar is something that’s not a vector but that, multiplied by a vector, gets you a vector out. This sounds circular. But by defining ‘vector’ and ‘scalar’ in how they interact with each other we get a really sweet flexibility. We can use the same reasoning — and the same proofs — for lots of things. Directions, yes. But also matrices, and continuous functions, and probabilities of events, and more. That’s a bit much to give the engineering student who’s trying to work out some problem about … I don’t know. Whatever they do over in that department. Truss bridges or electrical circuits or something.
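If that flexibility sounds too abstract, here’s a tiny sketch of the idea in Python, treating polynomials as vectors and ordinary real numbers as scalars. The function names are my own invention for illustration, not anyone’s library:

```python
# Polynomials behave like vectors: adding two gives another polynomial,
# and a scalar times a polynomial gives another polynomial.
# A polynomial is a coefficient list, lowest degree first: [1, 2] is 1 + 2x.

def add(p, q):
    """Add two polynomials (vector addition)."""
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [a + b for a, b in zip(p, q)]

def scale(c, p):
    """Multiply a polynomial by a scalar."""
    return [c * a for a in p]

def evaluate(p, x):
    """Evaluate a polynomial at the point x."""
    return sum(a * x**k for k, a in enumerate(p))

f = [1, 2]      # 1 + 2x
g = [0, 0, 3]   # 3x^2

# The vector-space behavior: evaluating the sum is summing the evaluations,
# and scaling before evaluating matches scaling afterward.
assert evaluate(add(f, g), 2) == evaluate(f, 2) + evaluate(g, 2)
assert evaluate(scale(5, f), 2) == 5 * evaluate(f, 2)
```

The same two checks would pass with matrices, or continuous functions, standing in for the coefficient lists; that is the sweet flexibility.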
Mark Leiknes’s Cow and Boy for the 9th is really about misheard song lyrics, a subject that will never die now that we don’t have the space to print lyrics in the album lining anymore, or album linings. But it has a joke resonant with that of The Buckets, in supposing that algebra is just some bunch of letters mixed up with numbers. And Cow and Boy was always a strip I loved, as baffling as it might be to a casual reader. It had a staggering number of running jokes, although not in this installment.
Greg Evans’s Luann Againn for the 9th shows Brad happy to work out arithmetic when it’s for something he’d like to know. The figure Luann gives is ridiculously high, though. If he needs 500 hairs, and one new hair grows in each week, then that’s a little under ten years’ worth of growth. Nine years and a bit over seven months, to be exact. If a moustache hair needs to be a half-inch long, and it grows at 1/8th of an inch per month, then it takes four months to be sufficiently long. So in the slowest possible case it’d be nine years, eleven months. I can chalk Luann’s answer up to being snidely pessimistic about his hair growth. But his calculator seems to agree, and that suggests something went wrong along the way.
John Zakour and Scott Roberts’s Maria’s Day for the 9th is a story problem joke. It looks to me like a reasonable story problem, too: the distance travelled and the speed are reasonable, and give sensible numbers. The two stops add a bit of complication that doesn’t seem out of line. And the kid’s confusion is fair enough. It takes some experience to realize that the problem splits into an easy part, a hard part, and an easy part. The first easy part is how long the stops take all together. That’s 25 minutes. The hard part is realizing that if you want to know the total travel time it doesn’t matter when the stops are. You can find the total travel time by adding together the time spent stopped with the time spent driving. And the other easy part is working out how long it takes to go 80 miles if you travel at 55 miles per hour. That’s just a division. So find that and add to it the 25 minutes spent at the two stops.
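If you’d like to see the pieces put together, here’s the arithmetic as a quick Python sketch. The 25 minutes of stops, the 80 miles, and the 55 miles per hour come from the problem as described; the bookkeeping is mine:

```python
# The easy part: the stops, taken all together.
stop_minutes = 25

# The other easy part: how long to drive 80 miles at 55 miles per hour.
# That's just a division, converted from hours into minutes.
driving_minutes = 80 / 55 * 60

# The hard part was only realizing these add, no matter when the stops fall.
total_minutes = stop_minutes + driving_minutes
print(round(total_minutes, 1))   # roughly 112.3 minutes, a bit under two hours
```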
Mathematics has a reputation for obscurity. It’s full of jargon. All these weird, technical terms. Properties that mathematicians take to be obvious but that normal people find baffling. “The only word I understood was ‘the’,” is the feedback mathematicians get when they show their family their thesis. I’m happy to share one term that is not obscure. This is one of those principles that anyone can understand. It’s so accessible that people might ask how it’s even mathematics.
The Pigeonhole Principle is usually put something like this. If you have more pigeons than there are different holes to nest them in, then at least one pigeonhole has to hold more than one pigeon. This is if the speaker wishes to keep pigeons in the discussion and is assuming there’s a positive number of both pigeons and holes. Tying a mathematical principle to something specific seems odd. We don’t talk about addition as apples put together or divided among friends. Not after elementary school anyway. Not once we’ve trained our natural number sense to work with abstractions.
If we want to make it abstract we can. Put it as “if you have more objects to put in boxes than you have boxes, then at least one box must hold more than one object”. In this form it is known as the Dirichlet Box Principle. Dirichlet here is Johann Peter Gustav Lejeune Dirichlet. He’s one of the seemingly infinite number of great 19th-Century French-German mathematicians. His family name was “Lejeune Dirichlet”, so his surname is an example of his own box principle. Everyone speaks of him as just Dirichlet, though. And they speak of him a lot, for stuff in mathematical physics, in thermodynamics, in Fourier transforms, in number theory (he proved two specific cases of Fermat’s Last Theorem), and in probability.
Still, at least in my experience, it’s “pigeonhole principle”. I don’t know why pigeons. It would be as good a metaphor to speak of horses put in stalls, or letters put in mailboxes, or pairs of socks put in hotel drawers. Perhaps it’s a reflection of the long history of breeding pigeons. That they’re familiar, likable animals, despite invective. That a bird in a cubby-hole seems like a cozy, pleasant image.
The pigeonhole principle is one of those neat little utility theorems. I think of it as something handy for existence proofs. These are proofs where you show there must be a thing. They don’t typically tell you what the thing is, or even help you to find it. They promise there is something to find.
Some of its uses seem too boring to bother proving. Pick five cards from a standard deck of cards; at least two will be the same suit. There are at least two non-bald people in Philadelphia who have the same number of hairs on their heads. Some of these uses seem interesting enough to prove, but worth nothing more than a shrug and a huh. Any 27-word sequence in the Constitution of the United States includes at least two words that begin with the same letter. Also at least two words that end with the same letter. If you pick any five integers from 1 to 8 (inclusive), then at least two of them will sum to nine.
Some uses start feeling unsettling. Draw five dots on the surface of an orange. It’s always possible to cut the orange in half in such a way that four points are on the same half. (This supposes that a point on the cut counts as being on both halves.)
Pick a set of 100 different whole numbers. It is always possible to select fifteen of these numbers, so that the difference between any pair of these select fifteen is some whole multiple of 7.
Select six people. There is always a triad of three people who all know one another, or who are all strangers to one another. (This supposes that “knowing one another” is symmetric. Real world relationships are messier than this. I have met Roger Penrose. There is not the slightest chance he is aware of this. Do we count as knowing one another or not?)
Some seem to transcend what we could possibly know. Drop ten points anywhere along a circle of diameter 5. Then we can conclude there are at least two points a distance of less than 2 from one another.
Drop ten points into an equilateral triangle whose sides are all length 1. Then there must be at least two points that are no more than distance 1/3 apart. (Cut the triangle into nine smaller triangles of side 1/3; with ten points, two must share one of the nine.)
Start with any lossless data compression algorithm. Your friend with the opinions about something called “Ubuntu Linux” can give you one. There must be at least one data set it cannot compress. Your friend is angry about this fact.
Take a line of length L. Drop on it some number of points n + 1. There is some shortest length between consecutive points. What is the largest possible shortest-length-between-points? It is the distance L/n, which you get by spacing the points evenly.
As I say, this won’t help you find the examples. You need to check the points in your triangle to see which ones are close to one another. You need to try out possible sets of your 100 numbers to find the ones that are all multiples of seven apart. But you have the assurance that the search will end in success, which is no small thing. And many of the conclusions you can draw are delights: results unexpected and surprising and wonderful. It’s great mathematics.
There’s two types of comics for the second of last week’s review. There’s some strips that are reruns. There’s some that just use mathematics as a shorthand for something else. There’s four strips in all.
John Deering’s Strange Brew for the 6th uses mathematics as shorthand for demonstrating intelligence. There’s no making particular sense out of the symbols, of course. And I’d think it dangerous that Lucky seems to be using both capital X and lowercase x in the same formula. There are times one does use the capital and lowercase versions of a letter in a formula. This is usually something like “x is one element of the set X, which is all the possible candidates for some thing”. In that case, you might get the case wrong, but context would make it clear what you meant. But, yes, sometimes there’s no sensible alternative and then you have to be careful.
Randy Glasbergen’s Glasbergen Cartoons for the 6th uses mathematics as shorthand for a hard subject. It’s certainly an economical notation. Alas, you don’t just learn from your mistakes. You learn from comparing your mistakes to a correct answer. And thinking about why you made the mistakes you did, and how to minimize or avoid those mistakes again.
So how would I do this problem? Well, carrying out the process isn’t too hard. But what do I expect the answer to be, roughly? To me, I look at this and reason: 473 is about 500. So 473 x 17 is about 500 x 17. 500 x 17 is 1000 times eight-and-a-half. So start with “about 8500”. That’s too high, obviously. I can do better. 8500 minus some correction. What correction? Well, 473 is roughly 500 minus 25. So I’ll subtract 25 times 17. Which isn’t hard, because 25 times 4 is 100. So 25 times 17? That’s 25 times 16 plus 25 times 1. 25 times 16 is 100 times 4. So 25 times 17 is 425. 8500 minus 425 is 8075. I’m still a bit high, by 2 times 17. 2 times 17 is 34. So subtract 34 from 8075: it should be about 8041.
John Zakour and Scott Roberts’s Maria’s Day for the 7th is a joke built on jargon. Every field has its jargon. Some of it will be safely original terms: people’s names (“Bessel function”) or synthetic words (“isomorphism”) that can’t be easily confused with everyday language. But some of it will be common terms given special meaning. “Right” angles and “right” triangles. “Normal” numbers. “Group”. “Right” as a description for angles and triangles goes back a long way, at least to — well, Merriam-Webster.com says 15th century. But EtymologyOnline says late 14th century. Neither offers their manuscripts. I’ll chalk it up to differences in how they interpret the texts. And possibly differences in whether they would count, say, a reference to “a right angle” written in French or German rather than in English directly.
Richard Thompson’s Richard’s Poor Almanac for the 7th has been run before. It references the Infinite Monkey Theorem. The monkeys this time around write up a treasury of Western Literature, not merely the canon of Shakespeare. That’s at least as impressive a feat. Also, while this is a rerun — sad to say Richard Thompson died in 2016, and was forced to retire from drawing before that — his work was fantastic and deserves attention.
I am surprised to have had no suggestions for an ‘O’ letter. I’m glad to take a free choice, certainly. It let me get at one of those fields I didn’t specialize in, but could easily have. And let me mention that while I’m still taking suggestions for the letters P through T, each other letter has gotten at least one nomination. I can be swayed by a neat term, though, so if you’ve thought of something hard to resist, try me. And later this month I’ll open up the letters U through Z. Might want to start thinking right away about what X, Y, and Z could be.
This is another term from graph theory, one of the great mathematical subjects for doodlers. A graph, here, is made of two sets of things. One is a bunch of fixed points, called ‘vertices’. The other is a bunch of curves, called ‘edges’. Every edge starts at one vertex and ends at one vertex. We don’t require that every vertex have an edge grow from it.
Already you can see why this is a fun subject. It models some stuff really well. Like, anything where you have a bunch of sources of stuff, that come together and spread out again? Chances are there’s a graph that describes this. There’s a compelling all-purpose interpretation. Have vertices represent the spots where something accumulates, or rests, or changes, or whatever. Have edges represent the paths along which something can move. This covers so much.
The next step is a “directed graph”. This comes from making the edges different. If we don’t say otherwise we suppose that stuff can move along an edge in either direction. But suppose otherwise. Suppose there are some edges that can be used in only one direction. This makes a “directed edge”. It’s easy to see the use if you think of graphs as modelling networks of stuff like city streets. Once you ponder city streets, one-way streets follow close behind. If every edge in a graph is directed, then you have a directed graph. Moving from a regular old undirected graph to a directed graph changes everything you’d learned about graph theory. Mostly it makes things harder. But you get some good things in trade. We become able to model sources, for example. This is where whatever might move comes from. Also sinks, which is where whatever might move disappears from our consideration.
You might fear that by switching to a directed graph there’s no way to have a two-way connection between a pair of vertices. Or that if there is you have to go through some third vertex. I understand your fear, and wish to reassure you. We can get a two-way connection even in a directed graph: just have the same two vertices be connected by two edges. One goes one way, one goes the other. I hope you feel some comfort.
What if we don’t have that, though? What if the directed graph doesn’t have any vertices with a pair of opposite-directed edges? And that, then, is an oriented graph. We get the orientation from looking at pairs of vertices. Each pair either has no edge connecting them, or has a single directed edge between them.
There’s a lot of potential oriented graphs. If you have three vertices, for example, there’s seven oriented graphs to make of that. You’re allowed to have a vertex not connected to any others. You’re also allowed to have the vertices grouped into a couple of subsets, and connect only to other vertices in their own subset. This is part of why four vertices can give you 42 different oriented graphs. Five vertices can give you 582 different oriented graphs. You can insist on a connected oriented graph.
A connected graph is what you guess. It’s a graph where there’s no vertices off on their own, unconnected to anything. There’s no subsets of vertices connected only to each other. This doesn’t mean you can always get from any one vertex to any other vertex. The directions might not allow you to do that. But if you’re willing to break the laws, and ignore the directions of these edges, you could then get from any vertex to any other vertex. Limiting yourself to connected graphs reduces the number of oriented graphs you can get. But not by as much as you might guess, at least not to start. There’s only one connected oriented graph for two vertices, instead of two. Three vertices have five connected oriented graphs, rather than seven. Four vertices have 34, rather than 42. Five vertices, 535 rather than 582. The total number of lost graphs grows, of course. The percentage of lost graphs dwindles, though.
There’s something more. What if there are no unconnected vertices? That is, every pair of vertices has an edge? If every pair of vertices in a graph has a direct connection we call that a “complete” graph. This is true whether the graph is directed or not. If you do have a complete oriented graph — every pair of vertices has a direct connection, and only the one direction — then that’s a “tournament”. If that seems like a whimsical name, consider one interpretation of it. Imagine a sports tournament in which every team played every other team once. And that there’s no ties. Each vertex represents one team. Each edge is the match played by the two teams. The direction is, let’s say, from the losing team to the winning team. (It’s as good if the direction is from the winning team to the losing team.) Then you have a complete, oriented, directed graph. And it represents your tournament.
And that delights me. A mathematician like me might talk a good game about building models. How one can represent things with mathematical constructs. Here, it’s done. You can make little dots, for vertices, and curved lines with arrows, for edges. And draw a picture that shows how a round-robin tournament works. It can be that direct.
My next Fall 2018 Mathematics A-To-Z post should be Friday. It’ll be available at this link, as are the rest of these glossary posts. And I’ve got requests for the next letter. I just have to live up to at least one of them.
Jason Chatfield’s Ginger Meggs for the 5th is a joke about resisting the story problem. I’m surprised by the particulars of this question. Turning an arithmetic problem into counts of some number of particular things is common enough and has a respectable history. But slices of broccoli quiche? I’m distracted by the choice, and I like quiche. It’s a weird thing for a kid to have, and a weird amount for anybody to have.
JC Duffy’s Lug Nuts for the 5th uses mathematics as a shorthand for intelligence. And it particularly uses π as shorthand for mathematics. There’s a lot of compressed concepts put into this. I shouldn’t be surprised if it’s rerun come mid-March.
Tom Toles’s Randolph Itch, 2 am for the 5th I’ve highlighted before. It’s the pie chart joke. It will never stop amusing me, but I suppose I should take Randolph Itch, 2 am out of my rotation of comics I read to include here.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 5th is a logic puzzle joke. And a set theory joke. Dad is trying to argue he can’t be surprised by his gift because it’ll belong to one of two sets of things. And he receives nothing. This ought to defy his expectations, if we think of “nothing” as being “the empty set”. The empty set is an indispensable part of set theory. It’s a set that has no elements, has nothing in it. Then suppose we talk about what it means for one set to be contained in another. Take what seems like an uncontroversial definition: set A is contained in set B if there’s nothing in A which is not also in B. Then the empty set is contained inside every set. So Dad, having supposed that he can’t be surprised, since he’d receive either something that is “socks” or something that is “not-socks”, does get surprised. He gets the one thing that is both “socks” and “not-socks” simultaneously.
I hate to pull this move a third time in one week (see here and here), but the logic of the joke doesn’t work for me. I’ll go along with “nothing” as being “the empty set” for these purposes. And I’ll accept that “nothing” is definitely “not-socks”. But to say that “nothing” is also “socks” is … weird, unless you are putting it in the language of set theory. I think the joke would be saved if it were more clearly established that Dad should be expecting some definite thing, so that no-thing would defy all expectations.
“Nothing” is a difficult subject to treat logically. I have been exposed a bit to the thinking of professional philosophers on the subject. Not enough that I feel I could say something non-stupid about the subject. But enough to say that yeah, they’re right, we have a really hard time describing “nothing”. The null set is better behaved. I suppose that’s because logicians have been able to tame it and give it some clearly defined properties.
If there is a theme to the last comic strips from the previous week, it’s that kids find arithmetic hard. That’s a title for you.
Bill Watterson’s Calvin and Hobbes for the 2nd is one of the classics, of course. Calvin’s made the mistake of supposing that mathematics is only about getting true answers. We’ll accept the merely true, if that’s what we can get. But we want interesting. Which is stuff that’s not just true but is unexpected or unforeseeable in some way. We see this when we talk about finding a “proper” answer, or subset, or divisor, or whatever. Some things are true for every question, and so, who cares?
Also, is it really true that Calvin doesn’t know any of his homework problems? It’s possible, but did he check?
Were I grading, I would accept an “I don’t know”, at least for partial credit, in certain conditions. Those involve the student writing out what they would like to do to try to solve the problem. If the student has a fair idea of something that ought to find a correct answer, then the student’s showing some mathematical understanding. But there are times that what’s being tested is proficiency at an operation, and a blank “I don’t know” would not help much with that.
Patrick Roberts’s Todd the Dinosaur for the 2nd has an arithmetic cameo. Fractions, particularly. They’re mentioned as something too dull to stay awake through. So for the joke’s purpose this could have been any subject that has an exposition-heavy segment. Fractions do have more complicated rules than adding whole numbers do. And introducing those rules can be hard. But anything where you introduce rules instead of showing what you can do with them is hard. I’m thinking here of several times people have tried to teach me board games by listing all the rules, instead of setting things up and letting me ask “what am I allowed to do now?” the first couple turns. I’m not sure how that would translate to fractions, but there might be something.
John Zakour and Scott Roberts’s Maria’s Day for the 2nd has another of Maria’s struggles with arithmetic. It’s presented as a challenge so fierce it can defeat even superheroes. Could be any subject, really. It’s hard to beat the visual economy of having it be a division problem, though.
Rick Kirkman and Jerry Scott’s Baby Blues for the 3rd shows a bit of youthful enthusiasm. Hammie’s parents would rather that enthusiasm be put to memorizing multiplication facts. I’m not sure this would match the fun of building stuff. But I remember finding patterns inside the multiplication table fascinating. Like how you could start from a perfect square and get the same sequence of numbers as you moved out along a diagonal. Or tracing out where the same number appeared in different rows and columns, like how just everything could multiply into 24. Might be worth playing with some.
I had a free choice of topics for today! Nobody had a suggestion for the letter ‘N’, so, I’ll take one of my own. If you did put in a suggestion, I apologize; I somehow missed the comment in which you did. I’ll try to do better in future.
Nearest Neighbor Model.
Why are restaurants noisy?
It’s one of those things I wondered while at a noisy restaurant. I have heard it is because restaurateurs believe patrons buy more, and more expensive stuff, in a noisy place. I don’t know that I have heard this correctly, nor that what I heard was correct. I’ll leave it to people who work that end of restaurants to say. But I wondered idly whether mathematics could answer why.
It’s easy to form a rough model. Suppose I want my brilliant words to be heard by the delightful people at my table. Then I have to be louder, to them, than the background noise is. Fine. I don’t like talking loudly. My normal voice is soft enough even I have a hard time making it out. And I’ll drop the ends of sentences when I feel like I’ve said all the interesting parts of them. But I can overcome my instinct if I must.
The trouble comes from other people thinking of themselves the way I think of myself. They want to be heard over how loud I have been. And there’s no convincing them they’re wrong. If there’s bunches of tables near one another, we’re going to have trouble. We’ll each be talking loud enough to drown one another out, until the whole place is a racket. If we’re close enough together, that is. If the tables around mine are empty, chances are my normal voice is enough for the cause. If they’re not, we might have trouble.
So this inspires a model. The restaurant is a space. The tables are set positions, points inside it. Each table is making some volume of noise. Each table is trying to be louder than the background noise. At least until the people at the table reach the limits of their screaming. Or decide they can’t talk, they’ll just eat and go somewhere pleasant.
Making calculations on this demands some more work. Some is obvious: how do you represent “quiet” and “loud”? Some is harder: how far do voices carry? Grant that a loud table is still loud if you’re near it. How far away before it doesn’t sound loud? How far away before you can’t hear it anyway? Imagine a dining room that’s 100 miles long. There’s no possible party at one end that could ever be heard at the other. Never mind that a 100-mile-long restaurant would be absurd. It shows that the limits of people’s voices are a thing we have to consider.
There are many ways to model this distance effect. A realistic one would fall off with distance, sure. But it would also allow for echoes and absorption by the walls, and by other patrons, and maybe by restaurant decor. This would take forever to get answers from, but if done right it would get very good answers. A simpler model would give answers less fitted to your actual restaurant. But the answers may be close enough, and let you understand the system. And may be simple enough that you can get answers quickly. Maybe even by hand.
And so I come to the “nearest neighbor model”. The common English meaning of the words suggests what it’s about. It comes from setups like my restaurant noise problem. It’s made of a bunch of points that each have some value. For my problem, tables and their noise level. And that value affects stuff in some region around these points.
In the “nearest neighbor model”, each point directly affects only its nearest neighbors. Saying which is the nearest neighbor is easy if the points are arranged in some regular grid. If they’re evenly spaced points on a line, say. Or a square grid. Or a triangular grid. If the points are in some other pattern, you need to think about what the nearest neighbors are. This is why people working in neighbor-nearness problems get paid the big money.
Suppose I use a nearest neighbor model for my restaurant problem. In this, I pretend the only background noise at my table is that of the people the next table over, in each direction. Two tables over? Nope. I don’t hear them at my table. I do get an indirect effect. Two tables over affects the table that’s between mine and theirs. But vice-versa, too. The table that’s 100 miles away can’t affect me directly, but it can affect a table in-between it and me. And that in-between table can affect the next one closer to me, and so on. The effect is attenuated, yes. Shouldn’t it be, if we’re looking at something farther away?
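The restaurant version of the nearest-neighbor idea is easy to sketch in code. This is my own toy, not anything from the original essay: every table tries to talk a bit louder than its loudest nearest neighbor, up to a screaming cap, and we iterate until nothing changes. All the numbers are made up for illustration.

```python
# Toy nearest-neighbor noise model (illustrative numbers, not real acoustics).
# Each table speaks at its natural level, or just over its loudest nearest
# neighbor, whichever is higher, capped at a maximum scream.

def settle_noise(base, boost=1.0, cap=10.0, rounds=100):
    """Iterate the escalation rule until the volumes stop changing."""
    volume = list(base)
    for _ in range(rounds):
        new = []
        for i, b in enumerate(base):
            neighbors = []
            if i > 0:
                neighbors.append(volume[i - 1])
            if i < len(volume) - 1:
                neighbors.append(volume[i + 1])
            loudest = max(neighbors, default=0.0)
            new.append(min(cap, max(b, loudest + boost)))
        if new == volume:
            break
        volume = new
    return volume

# One loud table in the middle eventually drags every table up to the cap:
# the arms race only stops when everyone is screaming.
print(settle_noise([2, 2, 2, 9, 2, 2, 2]))
```

The escalation to the cap is the “whole place is a racket” outcome; a more careful model would let a table stay at its natural voice when the background is already below it.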
This sort of model is easy to work with numerically. I’m inclined toward problems that work numerically. Analytically … well, it can be easy. It can be hard. There’s a one-dimensional version of this problem, a bunch of evenly-spaced sites on an infinitely long line. If each site is limited to one of exactly two values, the problem becomes easy enough that freshman physics majors can solve it exactly. They don’t, not the first time out. This is because it requires recognizing a trigonometry trick that they don’t realize would be relevant. But once they know the trick, they agree it’s easy, when they go back two years later and look at it again. It just takes familiarity.
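That one-dimensional two-value chain is the Ising model, and its exact solvability is easy to check numerically. Here is a sketch of the standard textbook version (open chain, no external field); the closed form below is the payoff of the trigonometry-style trick, which here shows up as hyperbolic cosines. This is my illustration, not necessarily the exact problem set the freshmen get.

```python
# The one-dimensional chain of two-valued sites (the Ising model, open ends,
# no external field). Brute-force the partition function over all 2^n
# configurations and compare with the known closed form.
from itertools import product
from math import cosh, exp

def partition_brute(n, beta_j):
    """Sum exp(beta*J * s_i * s_{i+1}) over every spin configuration."""
    total = 0.0
    for spins in product((-1, 1), repeat=n):
        bonds = sum(spins[i] * spins[i + 1] for i in range(n - 1))
        total += exp(beta_j * bonds)
    return total

def partition_exact(n, beta_j):
    """The exact answer for the open chain: Z = 2^n * cosh(beta*J)^(n-1)."""
    return 2**n * cosh(beta_j) ** (n - 1)

for n in (2, 3, 6):
    print(n, partition_brute(n, 0.7), partition_exact(n, 0.7))
```

The brute-force sum grows as 2^n; the closed form is instant, which is the whole appeal of an exactly solvable model.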
This comes up in thermodynamics, because it makes a nice model for how ferromagnetism can work. More realistic problems, like, two-dimensional grids? … That’s harder to solve exactly. Can be done, though not by undergraduates. Three-dimensional can’t, last time I looked. Weirdly, four-dimensional can. You expect problems to only get harder with more dimensions of space, and then you get a surprise like that.
The nearest-neighbor-model is a first choice. It’s hardly the only one. If I told you there were a next-nearest-neighbor model, what would you suppose it was? Yeah, you’d be right. As long as you supposed it was “things are affected by the nearest and the next-nearest neighbors”. Mathematicians have heard of loopholes too, you know.
As for my restaurant model? … I never actually modelled it. I did think about the model. I concluded my model wasn’t different enough from ferromagnetism models to need me to study it more. I might be mistaken. There may be interesting weird effects caused by the facts of restaurants. That restaurants are pretty small things. That they can have echo-y walls and ceilings. That they can have sound-absorbing things like partial walls or plants. Perhaps I gave up too easily when I thought I knew the answer. Some of my idle thoughts end up too idle.
The edition title says it all. Comic Strip Master Command sent me enough strips the past week for two editions and I made an unhappy discovery about one of the comics in today’s.
Dave Coverly’s Speed Bump for the 28th is your anthropomorphic-numerals joke for the week. We get to know the lowest common denominator from fractions. It’s easier to compute anything with a fraction in it if you can put everything under a common denominator. But it’s also — usually — easier to work with smaller denominators than larger ones. It’s always okay to multiply a number by 1. It may not help, but it can always be done. This has the result of multiplying both the numerator and denominator by the same number. So suppose you have something that’s written in terms of sixths, and something else written in terms of eighths. You can multiply the first thing by four-fourths, and the second thing by three-thirds. Then both fractions are in terms of 24ths and your calculation is, probably, easier.
So this strip is the rare one where I have to say the joke doesn’t work on mathematical grounds. Coverly was misled by the association between “lowest” and “smallest”. 2 is going to be the lowest common denominator very rarely. Everything in the problem needs to be in terms of even denominators to start with, and even that won’t guarantee it. I hate to do that, since the point of a comic strip is humor and getting any mathematics right is a bonus. But in this case, knowing the terminology shatters the joke. Coverly would have a mathematically valid joke were 9 offering the consolation “you’re not always the greatest common divisor”, the largest number that goes into a set of numbers. But nobody thinks being called the “greatest” anything ever needs consolation, so the joke would fall flat everywhere but in mathematics class.
Randy Glasbergen’s Glasbergen Cartoons for the 29th is a joke of the why-learn-mathematics model. “Because we always have done this” is not a reason compelling by the rules of deductive logic. It can have great practical value. Experience can encode things which are hard to state explicitly, or to untangle from one another. And an experienced system will have workarounds for the most obvious problems, ones that a new system will not have. And any attempt at educational reform, however well-planned or meant, must answer parents’ reasonable question of why their child should be your test case.
I do sometimes see algebra attacked as being too little-useful for the class time given. I could see good cases made for spending the time on other fields of mathematics. (Probability and statistics always stands out as potentially useful; the subjects were born from things people urgently needed to know.) I’m not competent to judge those arguments and so shall not.
Carl Skanberg’s That New Carl Smell for the 29th is a riff on jokes about giving more than 100%. Interpreting this giving-more-than-everything as running a deficit is a reasonable one. I’ve given my usual talk about “100% of what?” enough times now; I don’t need to repeat it until I think of something fresh to say.
Jeffrey Caulfield and Alexandre Rouillard’s Mustard and Boloney for the 30th uses mathematics — story problems, specifically — as icons of intelligence. I can’t speak to the Mensa experience, but intellectual types trying to out-do each other? Yes, that’s a thing that happens. I mostly dodge attempts to put me to a fun mathematics puzzle. I’m embarrassed by how long it can take me to actually do one of these, when put on the spot. (I have a similar reaction to people testing my knowledge of trivia in the stuff I actually do know a ridiculous amount about.) Mostly I hope Dave Coverly doesn’t think I’m being this kid.
Two commenters suggested the topic for today’s A to Z post. I suspect I’d have been interested in it if only one had. (Although Dina Yagoditch’s suggestion of the Menger Sponge is hard to resist.) But a double domination? The topic got suggested by Mr Wu, author of MathTuition88, and by John Golden, author of Math Hombre. My thanks to all for interesting things to think about.
So you know how in the first car you ever owned the alternator was always going bad? If you’re lucky, you reach a point where you start owning cars good enough that the alternator is not the thing always going bad. Once you’re there, congratulations. Now the thing that’s always going bad in your car will be the manifold. That one’s for my dad.
Manifolds are a way to do normal geometry on weird shapes. What’s normal geometry? It’s … you know, the way shapes work on your table, or in a room. The Euclidean geometry that we’re so used to that it’s hard to imagine it not working. Why worry about weird shapes? They’re interesting, for one. And they don’t have to be that weird to count as weird. A sphere, like the surface of the Earth, can be weird. And these weird shapes can be useful. Mathematical physics, for example, can represent the evolution of some complicated thing as a path drawn on a weird shape. Bringing what we know about geometry from years of study, and moving around rooms, to a problem that abstract makes our lives easier.
We use language that sounds like that of map-makers when discussing manifolds. We have maps. We gather together charts. The collection of charts describing a surface can be an atlas. All these words have common meanings. Mercifully, these common meanings don’t lead us too far from the mathematical meanings. We can even use the problem of mapping the surface of the Earth to understand manifolds.
If you love maps, the geography kind, you learn quickly that there’s no making a perfect two-dimensional map of the Earth’s surface. Some of these imperfections are obvious. You can distort shapes trying to make a flat map of the globe. You can distort sizes. But you can’t represent every point on the globe with a point on the paper. Not without doing something that really breaks continuity. Like, say, turning the North Pole into the whole line at the top of the map. Like in the Equirectangular projection. Or skipping some of the points, like in the Mercator projection. Or adding some cuts into a surface that doesn’t have them, like in the Goode homolosine projection. You may recognize this as the one used in classrooms back when the world had first begun.
But what if we don’t need the whole globe done in a single map? Turns out we can do that easy. We can make charts that cover a part of the surface. No one chart has to cover the whole of the Earth’s surface. It only has to cover some part of it, mapping that part onto a piece of common ordinary Euclidean space, where ordinary geometry holds. It’s the collection of charts that covers the whole surface. This collection of charts is an atlas. You have a manifold if it’s possible to make a coherent atlas. For this every point on the manifold has to be on at least one chart. It’s okay if a point is on several charts. It’s okay if some point is on all the charts. Like, suppose your original surface is a circle. You can represent this with an atlas of two charts. Each chart maps the circle, except for one point, onto a line segment. The two charts don’t both skip the same point. All but two points on this circle are on both charts of this atlas. That’s cool. What’s not okay is if some point can’t be coherently put onto some chart.
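The two-chart atlas of the circle is concrete enough to write out. This is a sketch of one standard choice (my parameterization, not the only one): each chart reads off an angle, and each chart is undefined at exactly one point, a different point for each.

```python
# A two-chart atlas of the unit circle. Chart A covers everything except the
# point (1, 0); chart B covers everything except (-1, 0). Each sends its part
# of the circle to an open interval of angles.
from math import atan2, pi, cos, sin

def chart_a(x, y):
    """Angle in (0, 2*pi); undefined at the skipped point (1, 0)."""
    theta = atan2(y, x)      # lies in (-pi, pi]
    if theta <= 0:
        theta += 2 * pi      # shift into (0, 2*pi]
    return None if theta == 2 * pi else theta

def chart_b(x, y):
    """Angle in (-pi, pi); undefined at the skipped point (-1, 0)."""
    theta = atan2(y, x)
    return None if theta == pi else theta

# Every sampled point of the circle lands on at least one chart.
for k in range(360):
    p = (cos(k * pi / 180), sin(k * pi / 180))
    assert chart_a(*p) is not None or chart_b(*p) is not None
print("every sample point is covered by some chart")
```

The point each chart skips is exactly the seam where its angle interval would have to close up, which is why one chart alone can never do the job.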
This sad fate can happen. Suppose instead of a circle you want to chart a figure-eight loop. That won’t work. The point where the figure crosses itself doesn’t look, locally, like a Euclidean space. It looks like an ‘x’. There’s no getting around that. There’s no atlas that can cover the whole of that surface. So that surface isn’t a manifold.
But many things are manifolds nevertheless. Toruses, the doughnut shapes, are. Möbius strips and Klein bottles are. Ellipsoids and hyperbolic surfaces are, or at least can be. Mathematical physics finds surfaces that describe all the ways the planets could move and still conserve the energy and momentum and angular momentum of the solar system. That cheesecloth surface, stretched through 54 dimensions, is a manifold. There are many possible atlases, with many more charts. But each of those means we can, at least locally, for particular problems, understand them the same way we understand cutouts of triangles and pentagons and circles on construction paper.
So to get back to cars: no one has ever said “my car runs okay, but I regret how I replaced the brake covers the moment I suspected they were wearing out”. Every car problem is easier when it’s done as soon as your budget and schedule allow.
I expected there to be a fair number of readers here in October. The A to Z project, particularly, implied that. A To Z months are exhausting, but they give me a lot of posts. And I’m as sure as I can be without actually checking that the number of posts is the biggest determining factor in how many readers I attract. There were 23 posts in October, compared to September’s 15 and my summertime usual of 12 to 14.
Then there’s the Playful Mathematics Education Blog Carnival. This posted in September — by six hours — but it was destined to bring new and, I hope, happy readers in. And that happened also. Here’s what the WordPress statistics showed me for the month:
So this was my highest-readership month since the blog started seven years ago. 2,010 page views, from 1,063 unique visitors. That’s also the greatest number of unique visitors I’ve had in one month, and the first time I’ve broken a thousand visitors. September had 1,505 page views from 874 unique visitors; August, 1,421 page views from 913 unique visitors.
There was a bit of an upswing in the number of likes: 94 of them issued in October, compared to September’s 65 and August’s 57. This is on the higher side for this year, but it is down a good bit from the comparable month two or three years ago. In June 2015, for my first A to Z, I drew over 500 likes; I don’t know where likers have gone.
There were 60 comments on the blog in October, partly people who liked or wanted to talk about A To Z topics, partly people suggesting others. It’s the greatest number of comments I’ve had in one month in two years now. September had 36 comments; August, 27. Have to go back to March 2016 to find a month when more people said anything around here. That, too, was an A-to-Z month, and one of the handful of months when I posted something every single day.
There were a lot of popular posts this month, naturally. This might be the first time in years that none of the top five were Reading the Comics posts. The A to Z and the Playful Mathematics Education Blog Carnival squeezed out very nearly everything:
52 countries sent me any readers in August. 58 did so in September. There were 16 single-reader countries in August and 14 in September. For busy October?
Hong Kong SAR China
Macau SAR China
74 countries sending me any readers at all. 23 countries sent me a single reader for the month. The Czech Republic has sent me a single reader two months in a row now; Colombia, three months in a row.
Insights tells me that I started November with a total of 69,895 page views, from a logged 34,538 unique visitors. As ever, please recall the first couple years WordPress didn’t tell us anything about the unique visitor count, so for all I know there “should” be more.
In October I published 28,733 words, which is nearly double September’s total. Whew. I’d posted 142 things this year by the start of November, and gathered a total of 391 comments and 787 likes for the year to date. That averaged to 3.1 comments per post, up from 2.6 at the start of October. And 5.5 likes per posting, down from 5.8 at the start of October. The 142 posts through the start of November averaged 996 words each. That’s up from 946 words per post at the start of October. I’m going to crush myself beneath a pile of words that I meant to be less deep.
While putting together the last comics from a week ago I realized there was a repeat among them. And a pretty recent repeat too. I’m supposing this is a one-off, but who can be sure? We’ll get there. I figure to cover last week’s mathematically-themed comics in posts on Wednesday and Thursday, subject to circumstances.
As fits the joke, the bit of calculus in this textbook paragraph is wrong. does not equal . This is even ignoring that we should expect, with an indefinite integral like this, a constant of integration. An indefinite integral like this is equal to a family of related functions, though it’s common shorthand to write out one representative function. But the indefinite integral of is not . You can confirm that by differentiating . The result is nothing like . Differentiating an indefinite integral should get the original function back. Here are the rules you need to do that for yourself.
As I make it out, a correct indefinite integral would be:
Plus that “constant of integration” the value of which we can’t tell just from the function we want to indefinitely-integrate. I admit I haven’t double-checked that I’m right in my work here. I trust someone will tell me if I’m not. I’m going to feel proud enough if I can get the LaTeX there to display.
Stephen Beals’s Adult Children for the 27th has run already. It turned up in late March of this year. Michael Spivak’s Calculus is a good choice for representative textbook. Calculus holds its terrors, too. Even someone who’s gotten through trigonometry can find the subject full of weird, apparently arbitrary rules. And formulas like those in the above paragraph.
Rob Harrell’s Big Top for the 27th is a strip about the difficulties of splitting a restaurant bill. And they’ve not even got to calculating the tip. (Maybe it’s just a strip about trying to push the group to splitting the bill a way that lets you off cheap. I haven’t had to face a group bill like this in several years. My skills with it are rusty.)
I’ve settled to a pace of about four comics each essay. It makes for several Reading the Comics posts each week. But none of them are monsters that eat up whole evenings to prepare. Except that last week there were enough comics which made my initial cut that I either have to write a huge essay or I have to let last week’s strips spill over to Sunday. I choose that option. It’s the only way to square it with the demands of the A to Z posts, which keep creeping above a thousand words each however much I swear that this next topic is a nice quick one.
Roy Schneider’s The Humble Stumble for the 25th has some mathematics in a supporting part. It’s used to set up how strange Tommy is. Mathematics makes a good shorthand for this. It’s usually compact to write, important for word balloons. And it’s usually about things people find esoteric if not hilariously irrelevant to life. Tommy’s equation is an accurate description of what centripetal force would be needed to keep the Moon in a circular orbit at about the distance it really is. I’m not sure how to take Tommy’s doubts. If he’s just unclear about why this should be so, all right. Part of good mathematical learning can be working out the logic of some claim. If he’s not sure that Newtonian mechanics is correct — well, fair enough to wonder how we know it’s right. Spoiler: it is right. (For the problem of the Moon orbiting the Earth it’s right, at least to any reasonable precision.)
Stephan Pastis’s Pearls Before Swine for the 25th shows how we can use statistics to improve our lives. At least, it shows how tracking different things can let us find correlations. These correlations might give us information about how to do things better. It’s usually a shaky plan to act on a correlation before you have a working hypothesis about why the correlation should hold. But it can give you leads to pursue.
Eric the Circle for the 26th, this one by Vissoro, is a “two types of people in the world” joke. Given the artwork I believe it’s also riffing on the binary-arithmetic version of the joke. Which is, “there are 10 types of people in the world, those who understand binary and those who don’t”.
I got an irresistible topic for today’s essay. It’s courtesy Peter Mander, author of Carnot Cycle, “the classical blog about thermodynamics”. It’s bimonthly and it’s one worth waiting for. Some of the essays are historical; some are statistical-mechanics; many are mixtures of them. You could make a fair argument that thermodynamics is the most important field of physics. It’s certainly one that hasn’t gotten the popularization treatment it deserves, for its importance. Mander is doing something to correct that.
It is hard to think of limits without thinking of motion. The language even professional mathematicians use suggests it. We speak of the limit of a function “as x goes to a”, or “as x goes to infinity”. Maybe “as x goes to zero”. But a function is a fixed thing, a relationship between stuff in a domain and stuff in a range. It can’t change any more than January, AD 1988 can change. And ‘x’ here is a dummy variable, part of the scaffolding to let us find what we want to know. I suppose ‘x’ can change, but if we ever see it, something’s gone very wrong. But we want to use it to learn something about a function for a point like ‘a’ or ‘infinity’ or ‘zero’.
The language of motion helps us learn, to a point. We can do little experiments: if , then, what should we expect it to be for x near zero? It’s irresistible to try out the calculator. Let x be 0.1. 0.01. 0.001. 0.0001. The numbers say this f(x) gets closer and closer to 1. That’s good, right? We know we can’t just put in an x of zero, because there’s some trouble that makes. But we can imagine creeping up on the zero we really wanted. We might spot some obvious prospects for mischief: what if x is negative? We should try -0.1, -0.01, -0.001 and so on. And maybe we won’t get exactly the right answer. But if all we care about is the first (say) three digits and we try out a bunch of x’s and the corresponding f(x)’s agree to those three digits, that’s good enough, right?
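The formula in that paragraph was lost with the post’s images; the classic example fitting every part of the description (approaches 1 near zero, has no value at zero itself, behaves the same for negative x) is f(x) = sin(x)/x, so that is my guess for the creeping-up experiment in code.

```python
# The post's formula was lost to an image; f(x) = sin(x)/x is the classic
# example matching the description. Creep up on zero from both sides.
from math import sin

def f(x):
    return sin(x) / x   # no value at x = 0 itself

for x in (0.1, 0.01, 0.001, -0.1, -0.01, -0.001):
    print(x, f(x))
# The values crowd toward 1 from either side of zero.
```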
This is good for giving an idea of what to expect a limit to look like. It should be, well, what it really really really looks like a function should be. It takes some thinking to see where it might go wrong. It might go to different numbers based on which side you approach from. But that seems like something you can rationalize. Indeed, we do; we can speak of functions having different limits based on what direction you approach from. Sometimes that’s the best one can say about them.
But it can get worse. It’s possible to make functions that do crazy weird things. Some of these look like you’re just trying to be difficult. Like, set f(x) equal to 1 if x is rational and 0 if x is irrational. If you don’t expect that to be weird you’re not paying attention. Can’t blame someone for deciding that falls outside the realm of stuff you should be able to find limits for. And who would make, say, an f(x) that was 1 if x was 0.1 raised to some power, but 2 if x was 0.2 raised to some power, and 3 otherwise? Besides someone trying to prove a point?
Fine. But you can make a function that looks innocent and yet acts weird if the domain is two-dimensional. Or more. It makes sense to say that the functions I wrote in the above paragraph should be ruled out of consideration. But the limit of at the origin? You get different results approaching in different directions. And the function doesn’t give obvious signs of imminent danger here.
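The two-dimensional formula here was also lost to an image; a standard function with exactly this pathology is f(x, y) = xy/(x² + y²), which I use for illustration. Approach the origin along the line y = mx and the would-be limit depends on the slope m.

```python
# A standard innocent-looking function with no limit at the origin:
# f(x, y) = x*y / (x^2 + y^2). Along the line y = m*x the value is
# m / (1 + m^2), so different directions of approach disagree.
def f(x, y):
    return x * y / (x**2 + y**2)

for m in (0.0, 1.0, -1.0, 2.0):
    x = 1e-9                  # very close to the origin
    print(m, f(x, m * x))     # 0, 0.5, -0.5, 0.4: one value per direction
```

Since the answer depends on the direction of approach, no single L passes the neighborhood test, and the limit at the origin does not exist.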
We need a better idea. And we even have one. This took centuries of mathematical wrangling and arguments about what should and shouldn’t be allowed. This should inspire sympathy with Intro Calc students who don’t understand all this by the end of week three. But here’s what we have.
I need a supplementary idea first. That is the neighborhood. A point has a neighborhood if there’s some open set that contains it. We represent this by drawing a little blob around the point we care about. If we’re looking at the neighborhood of a real number, then this is a little interval, that’s all. When we actually get around to calculating, we make these neighborhoods little circles. Maybe balls. But when we’re doing proofs about how limits work, or how we use them to prove things, we make blobs. This “neighborhood” idea looks simple, but we need it, so here we go.
So start with a function, named ‘f’. It has a domain, which I’ll call ‘D’. And a range, which I want to call ‘R’, but I don’t think I need the shorthand. Now pick some point ‘a’. This is the point at which we want to evaluate the limit. This seems like it ought to be called the “limit point” and it’s not. I’m sorry. Mathematicians use “limit point” to talk about something else. And, unfortunately, it makes so much sense in that context that we aren’t going to change away from that.
‘a’ might be in the domain ‘D’. It might not. It might be on the border of ‘D’. All that’s important is that there be a neighborhood inside ‘D’ that contains ‘a’.
I don’t know what f(a) is. There might not even be an f(a), if a is on the boundary of the domain ‘D’. But I do know that everything inside the neighborhood of ‘a’, apart from ‘a’, is in the domain. So we can look at the values of f(x) for all the x’s in this neighborhood. This will create a set, in the range, that’s known as the image of the neighborhood. It might be a continuous chunk in the range. It might be a couple of chunks. It might be a single point. It might be some crazy-quilt set. Depends on ‘f’. And the neighborhood. No matter.
Now I need you to imagine the reverse. Pick a point in the range. And then draw a neighborhood around it. Then pick out what we call the pre-image of it. That’s all the points in the domain that get matched to values inside that neighborhood. Don’t worry about trying to do it; that’s for the homework practice. Would you agree with me that you can imagine it?
I hope so because I’m about to describe the part where Intro Calc students think hard about whether they need this class after all.
All right. Then I want something in the range. I’m going to call it ‘L’. And it’s special. It’s the limit of ‘f’ at ‘a’ if this following bit is true:
Think of every neighborhood you could pick of ‘L’. Can be big, can be small. Just has to be a neighborhood of ‘L’. Now think of the pre-image of that neighborhood. Is there always a neighborhood of ‘a’ inside that pre-image? It’s okay if it’s a tiny neighborhood. Just has to be an open neighborhood. It doesn’t have to contain ‘a’. You can allow a pinpoint hole there.
If you can always do this, however tiny the neighborhood of ‘L’ is, then the limit of ‘f’ at ‘a’ is ‘L’. If you can’t always do this — if there’s even a single exception — then there is no limit of ‘f’ at ‘a’.
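For real-valued functions of a real variable, this neighborhood game compresses into the familiar symbols. An interval neighborhood of L of radius ε is the target; the δ-neighborhood of a, with its pinpoint hole, is the neighborhood we must find inside the pre-image:

```latex
% The neighborhood definition above, specialized to real functions,
% is the standard epsilon-delta definition:
\[
  \lim_{x \to a} f(x) = L
  \quad\Longleftrightarrow\quad
  \forall \varepsilon > 0 \;\, \exists \delta > 0 :\;
  0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon .
\]
```

The condition 0 < |x - a| is the pinpoint hole: we never ask what happens at a itself, only arbitrarily near it.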
I know. I felt like that the first couple times through the subject too. The definition feels backward. Worse, it feels like it begs the question. We suppose there’s an ‘L’ and then test these properties about it and then if it works we say we’re done? I know. It’s a pain when you start calculating this with specific formulas and all that, too. But supposing there is an answer and then learning properties about it, including whether it can exist? That’s a slick trick. We can use it.
Thing is, the pain is worth it. We can calculate with it and not have to out-think tricky functions. It works for domains with as many dimensions as you need. It works for limits that aren’t inside the domain. It works with domains and ranges that aren’t real numbers. It works for functions with weird and complicated domains. We can adapt it if we want to consider limits that are constrained in some way. It won’t be fooled by tricks like I put up above, the f(x) with different rules for the rational and irrational numbers.
So mathematicians shrug, and do enough problems that they get the hang of it, and use this definition. It’s worth it, once you get there.
Brian Fies’s The Last Mechanical Monster for the 24th is a repeat. I included it last October, when I first saw it on GoComics. Still, the equations in it are right, for ballistic flight. Ballistic means that something gets an initial velocity in a particular direction and then travels without any further impulse. Just gravity. It’s a pretty good description for any system where acceleration’s done for very brief times. So, things fired from guns. Rockets, which typically have thrust for a tiny slice of their whole journey and coast the rest of the time. Anything that gets dropped. Or, as in here, a mad scientist training his robot to smash his way through a bank, and getting flung so.
The symbols in the equations are not officially standardized. But they might as well be. ‘v’ here means the speed that something’s tossed into the air. It really wants to be ‘velocity’, but velocity, in the trades, carries with it directional information. And here that’s buried in ‘θ’, the angle with respect to vertical that the thing starts flight in. ‘g’ is the acceleration of gravity, near enough constant if you don’t travel any great distance over the surface of the Earth. ‘y0’ is the height from which the thing started to fly. And so then ‘d’ becomes the distance travelled, while ‘t’ is the time it takes to travel. I’m impressed by the mad scientist (the one from the original Superman cartoon, in 1941; Fies wrote a graphic novel about that man after his release from jail in the present day).
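I can’t reproduce the strip’s equations here, but the standard ballistic formulas are short enough to sketch. In this version, my convention rather than necessarily the strip’s, the launch angle θ is measured from the horizontal; v, g, and y0 are as described above.

```python
# Standard ballistic-flight formulas: launch speed v, launch angle theta
# (measured from the horizontal in this sketch), gravity g, launch height y0.
from math import sin, cos, sqrt, radians

def flight_time(v, theta, y0, g=9.81):
    """Time until the projectile comes back down to height zero."""
    vy = v * sin(theta)
    return (vy + sqrt(vy**2 + 2 * g * y0)) / g

def flight_distance(v, theta, y0, g=9.81):
    """Horizontal distance covered during that flight time."""
    return v * cos(theta) * flight_time(v, theta, y0, g)

# Launched from the ground, this reduces to the familiar range formula
# v^2 * sin(2*theta) / g; at 45 degrees that's the maximum range.
v, theta = 20.0, radians(45)
print(flight_distance(v, theta, 0.0))   # about 40.8 m
```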
Greg Cravens’s Hubris! for the 24th jokes about the dangers of tangled earbuds. For once, mathematics can help! There’s even a whole field of mathematics about this. Not earbuds specifically, but about knots. It’s called knot theory. I trust field was named by someone caught by surprise by the question. A knot, in this context, is made of a loop of thread that’s assumed to be infinitely elastic, so you can always stretch it out or twist it around some. And it’s frictionless, so you can slide the surface against itself without resistance. And you can push it along an end. These are properties that real-world materials rarely have.
But. They can be close enough. And knot theory tells us some great, useful stuff. Among them: your earbuds are never truly knotted. To be a knot at all, the string has to loop back and join itself. That is, it has to be like a rubber band, or maybe an infinity scarf. If it’s got loose ends, it’s no knot. It’s topologically just a straight thread with some twists made in the surface. They can come loose.
All that holds these earbuds together is the friction of the wires against each other. (That the earbud wire splits into a left and a right bud doesn’t matter, here.) They can be loosened. Let me share how.
My love owns, among other things, a marionette dragon. And once, despite it being stored properly, the threads for it got tangled, and those things are impossible to untangle on purpose. I, having had one (1) whole semester of knot theory in grad school, knew an answer. I held the marionette upside-down, by the dragon. The tangled wires and the crossed sticks that control it hung loose underneath. And then shook the puppet around. This made the wires, and the sticks, shake around. They untangled, quickly.
What held the marionette strings, and what holds earbuds, together, is just friction. It’s hard to make the wire slide loosely against itself. Shaking it around, though? That gives it some energy. That gives the wire some play. And here we have one of the handful of cases where entropy does something useful for us. There’s a limit to how tightly a wire can loop around itself. There’s no limit to how loosely it can go. Little, regular, random shakes will tend to loosen the wire. When it’s loose enough, it untangles naturally.
You can help this along. We all know how. Use a pen-point, a toothpick, or a needle to pry some of the wires apart. That makes the “knot” easier to remove. This works by the same principle. If you reduce how much the wire contacts itself, you reduce the friction on the wire. The wire can slide more easily into the un-knot that it truly is. The comic’s tech support guy gave up too easily.
Samson’s Dark Side of the Horse for the 25th is the Roman numerals joke for this essay. And a cute bit about coincidences between what you can spell out with Roman numerals and sounds people might make. Writing out calculations evokes peculiar, magical prowess. When they include, however obliquely, words? Or parts of words? Can’t blame people for seeing the supernatural in it.
We’re at the end of another month. So it’s a good chance to set out requests for the next several week’s worth of my mathematics A-To-Z. As I say, I’ve been doing this piecemeal so that I can keep track of requests better. I think it’s been working out, too.
If you have any mathematical topics with a name that starts N through T, let me know! I usually go by a first-come, first-served basis for each letter. But I will vary that if I realize one of the alternatives is more suggestive of a good essay topic. And I may use a synonym or an alternate phrasing if both topics for a particular letter interest me.
Also when you do make a request, please feel free to mention your blog, Twitter feed, Mathstodon account, or any other project of yours that readers might find interesting. I’m happy to throw in a mention as I get to the word of the day.
So! I’m open for nominations. Here are the words I’ve used in past A to Z sequences. I probably don’t want to revisit them. But I will think over, if I get a request, whether I might have new opinions.
Today’s request is another from John Golden, @mathhombre on Twitter and similarly on Blogspot. It’s specifically for Kelvin — “scientist or temperature unit”, the sort of open-ended goal I delight in. I decided on the scientist. But that’s a lot even for what I honestly thought would be a quick little essay. So I’m going to take out a tiny slice of a long and amazingly fruitful career. There’s so much more than this.
Before I get into what I did pick, let me repeat an important warning about historical essays. Every history is incomplete, yes. But any claim about something being done for the first time is simplified to the point of being wrong. Any claim about an individual discovering or inventing something is simplified to the point of being wrong. Everything is more complicated and, especially, more ambiguous than this. If you do not love the challenge of working out a coherent narrative when the most discrete and specific facts are also the ones that are trivia, do not get into history. It will only break your heart and mislead your readers. With that disclaimer, let me try a tiny slice of the life of William Thomson, the Baron Kelvin.
Kelvin (the scientist).
The great thing about a magnetic compass is that it’s easy. Set the thing on an axis and let it float freely. It aligns itself to the magnetic poles. It’s easy to see why this looks like magic.
The trouble is that it’s not quite right. It’s near enough for many purposes. But the direction a magnetic compass points to as north is not the true geographic north. Fortunately, we’ve got a fair idea just how far off north that is. It depends on where you are. If you have a rough idea where you already are, you can make a correction. We can print up charts saying how much of a correction to make.
The trouble is that it’s still not quite right. The location of the magnetic north and south poles wanders. Fortunately we’ve got a fair idea of how quickly it’s moving, and in what direction. So if you have a rough idea how out of date your chart is, and what direction the poles were moving in, you can make a correction. We can communicate how much the variance between true north and magnetic north varies.
The trouble is that it’s still not quite right. The size of the variation depends on the season of the year. But all right; we should have a rough idea what season it is. We can correct for that. The size of the variation also depends on what time of day it is. Compasses point farther east at around 8 am (sun time) than they do the rest of the day, and farther west around 1 pm. At least they did when Alan Gurney’s Compass: A Story of Exploration and Innovation was published. I would be unsurprised if that’s changed since the book came out a dozen years ago. Still. These are all, we might say, global concerns. They’re based on where you are and when you look at the compass. But they don’t depend on you, the specific observer.
The trouble is that it’s still not quite right yet. Almost as soon as compasses were used for navigation, on ships, mariners noticed the compass could vary. And not just because compasses were often badly designed and badly made. The ships themselves got in the way. The problem started with guns, the iron of which led compasses astray. When it was just the ship’s guns the problem could be coped with. Set the compass binnacle far from any source of iron, and the error should be small enough.
The trouble is when the time comes to make ships with iron. There are great benefits you get from cladding ships in iron, or making them of iron altogether. Losing the benefits of navigation, though … that’s a bit much.
There’s an obvious answer. Suppose you know the construction of the ship throws off compass bearings. Then measure what the compass reads, at some point when you know what it should read. Use that to correct your measurements when you aren’t sure. From the early 1800s mariners could use a method called “swinging the ship”, setting the ship at known angles and comparing what the compass read. It’s a bit of a chore. And you should arrange things you need to do so that it’s harder to make a careless mistake at them.
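The logic of swinging the ship amounts to building a deviation table and consulting it later. Here’s a minimal sketch of the idea in modern terms; the headings and deviation values are made up for illustration, not real survey data.

```python
# "Swinging the ship": compare compass readings against known true
# headings, record the error at each, and use that table to correct
# later readings. All numbers here are invented for illustration.
deviation_table = {0: 2.0, 90: -3.0, 180: -1.0, 270: 4.0}
# deviation in degrees (east-positive) measured at each swung heading

def corrected_heading(compass_deg):
    """Correct a reading using the deviation at the nearest swung heading."""
    nearest = min(
        deviation_table,
        key=lambda h: min(abs(h - compass_deg), 360 - abs(h - compass_deg)),
    )
    return (compass_deg + deviation_table[nearest]) % 360

# A reading of 85 degrees is nearest the 90-degree entry, so we apply
# its -3 degree deviation and report 82 degrees.
```

A real table would interpolate between many more swung headings, and would go stale as the ship’s iron shifted — which is exactly why the chore had to be repeated.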
In the 1850s John Gray of Liverpool patented a binnacle — the little pillar that holds the compass — which used the other obvious but brilliant approach. If the iron which builds the ship sends the compass awry, why not put iron near the compass to put the compass back where it should be? This set up a contraption of a binnacle surrounded by adjustable, correcting magnets.
Enter finally William Thomson, who would become Baron Kelvin in 1892. In 1871 the magazine Good Words asked him to write an article about the marine compass. In 1874 he published his first essay on the subject. The second part appeared five years after that. I am not certain that this is directly related to the tiny slice of story I tell. I just mention it to reassure every academic who’s falling behind on their paper-writing, which is all of them.
But come the 1880s Thomson patented an improved binnacle. Thomson had the sort of talents normally associated only with the heroes of some lovable yet dopey space-opera of the 1930s. He was a talented scientist, competent in thermodynamics and electricity and magnetism and fluid flow. He was a skilled mathematician, as you’d need to be to keep up with all that and along the way prove the Stokes theorem. (This is one of those incredibly useful theorems that gives information about the interior of a volume using only integrals over the surface.) He was a magnificent engineer, with a particular skill at developing instruments that would brilliantly measure delicate matters. He’s famous for saving the trans-Atlantic telegraph cable project. He recognized that what was needed was not more voltage to drive signal through three thousand miles of dubiously made copper wire, but rather ways to pick up the feeble signals that could come across, and amplify them into usability. And also described the forces at work on a ship that is laying a long line of submarine cable. And he was a manufacturer, able to turn these designs into mass-produced products. This through collaborating with James White, of Glasgow, for over half a century. And a businessman, able to convince people and organizations to use the things. He’s an implausible protagonist; and yet, there he is.
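For the record, the theorem mentioned in passing can be stated compactly. In modern vector-calculus notation (a sketch; Thomson and Stokes worked in components), the Kelvin–Stokes theorem relates a surface to its boundary curve, and its close sibling, the divergence theorem, is the one that trades a volume’s interior for its surface:

```latex
% Kelvin--Stokes theorem: the circulation around the boundary of a
% surface equals the flux of the curl through the surface
\oint_{\partial S} \mathbf{F}\cdot d\mathbf{r}
  = \iint_{S} \left(\nabla \times \mathbf{F}\right)\cdot d\mathbf{S}

% Divergence theorem: the integral of the divergence over a volume
% equals the flux through the volume's bounding surface
\iiint_{V} \left(\nabla \cdot \mathbf{F}\right)\, dV
  = \iint_{\partial V} \mathbf{F}\cdot d\mathbf{S}
```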
Thomson’s revision for the binnacle made it simpler. A pair of spheres, flanking the compass, and adjustable. The Royal Museums Greenwich web site offers a picture of this sort of system. It’s not so shiny as others in the collection. But this angle shows how adjustable the system would be. It’s a design that shows brilliance behind it. What work you might have to do to use it is obvious. At least it’s obvious once you’re told the spheres are adjustable. To reduce a massive, lingering, challenging problem to something easy is one of the great accomplishments of any practical mathematician.
This was not all Thomson did in maritime work. He’d developed an analog computer which would calculate the tides. Wikipedia tells me that Thomson claimed a similar mechanism could solve arbitrary differential equations. I’d accept that claim, if he made it. Thomson also developed better tools for sounding depths. And developed compasses proper, not just the correcting tools for binnacles. A maritime compass is a great practical challenge. It has to be able to move freely, so that it can give a correct direction even as the ship changes direction. But it can’t move too freely, or it becomes useless in rolling seas. It has to offer great precision, or it loses its use in directing long journeys. It has to be quick to read, or it won’t be consulted. Thomson designed a compass that was, my readings indicate, a great fit for all these constraints. By the time of his death in 1907 Kelvin and White (the company had various names) had made something like ten thousand compasses and binnacles.
And this from a person attached to all sorts of statistical mechanics stuff and who’s important for designing electrical circuits and the like.
It’s another week with several on-topic installments of Frazz. Again, Jef Mallet, you and I live in the same metro area. Wave to me at the farmer’s market or something. I’m kind of able to talk to people in real life, if I can keep in view three different paths to escape and know two bathrooms to hide in. Horrock’s is great for that.
Jef Mallet’s Frazz for the 22nd is a bit of wordplay. It’s built on the association between “negative” and “wrong”. And the confusing fact that multiplying a negative number by a negative number results in a positive number. It sounds like a trick. Still, negative numbers are tricky. The name connotes something that’s gone a bit wrong. It took time to understand what they were and how they should work. This weird multiplication rule follows from that. If we don’t suppose this to be true, then we break other ideas we have about multiplication and comparative sizes and such. Mathematicians needed to get comfortable with negative numbers. For a long time, for example, mathematicians would treat polynomials with subtracted terms and polynomials with only added terms as different kinds of problems to solve. Today we see a -4 as no harder than a +4, now that we’re good at multiplying it out. And I have read, but have not seen explained, that there was uncertainty among the philosophers of mathematics about whether we should consider negative numbers, as a group, to be greater than or less than positive numbers. (I have reasons for thinking this a mighty interesting speculation.) There’s reasons to doubt them, is what I have to say.
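That the rule follows from keeping our other ideas about multiplication intact can be shown in a couple of lines. Assuming only distributivity and that anything times zero is zero:

```latex
0 = (-1)\cdot 0
  = (-1)\cdot\bigl(1 + (-1)\bigr)
  = (-1)\cdot 1 + (-1)\cdot(-1)
  = -1 + (-1)\cdot(-1)
% adding 1 to both sides forces (-1)\cdot(-1) = 1
```

So if a negative times a negative were anything but a positive, the distributive law would have to go.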
Bob Weber Jr and Jay Stephens’s Oh Brother for the 22nd reminds me of my childhood. At some point I was pairing up the counting numbers and the letters of the alphabet, and realized that the alphabet ended while the numbers did not. Something about that offended my young sense of justice. I’m not sure how, anymore. But that it was always possible to find a bigger number than whatever you thought was the biggest caught my imagination.
There is, surely, a largest finite number that anybody will ever use for something, even if it’s just hyperbole. I’m curious what it will be. Surely we can’t have already used it. A number named Skewes’s Number was famous, for a while, as the largest number actually used in a proof of something. The fame came from Isaac Asimov writing an essay about the number, and why someone might care, and how hard it is just describing how big the number is in a comprehensible way. Wikipedia tells me this number’s been far exceeded by, among other things, something called Rayo’s Number. It’s “the smallest number bigger than any finite number named by an expression in the language of set theory with a googol symbols or less” (plus some technical points to keep you from cheating). Which, all right, but I’d like to know if we think the first digit is a 1, maybe a 2? Somehow I don’t demand that of Skewes, perhaps because I read that Asimov essay when I was at an impressionable age.
Jef Mallet’s Frazz for the 23rd has Caulfield talk about a fraction divided by a fraction. And particularly he says “a fraction divided by a fraction is just a fraction times a flipped fraction”. This offends me, somehow. This even though that is how I’d calculate the value of the division, if I needed to know that. But it seems to me like automatically going to that process skips recognizing that, say, one-half divided by one-quarter shouldn’t be surprising if it turns out not to be a fraction. Well, Caulfield’s just looking to cause trouble with a string of wordplay. I can think of how to divide a fraction by a fraction and get zero.
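Caulfield’s flip-and-multiply rule is easy to check mechanically. A quick sketch using Python’s standard `fractions` module, with one-half and one-quarter as the example:

```python
from fractions import Fraction

# Dividing a fraction by a fraction, two ways.
a = Fraction(1, 2)
b = Fraction(1, 4)

quotient = a / b                                        # built-in division
by_flipping = a * Fraction(b.denominator, b.numerator)  # flip and multiply

# Both come out to 2: a whole number, even though we started
# with two fractions. And dividing a zero fraction by any nonzero
# fraction gives zero.
zero_case = Fraction(0, 2) / Fraction(1, 2)
```

The point stands: the mechanical rule always works, but the answer needn’t look like the inputs.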
Ashleigh Brilliant’s Pot-Shots for the 23rd promises to recapitulate the whole history of mathematics in a single panel. Ambitious bit of work. It’s easy to picture going from the idea of 1 to any of the positive whole numbers, though. It’s so easy it doesn’t even need humans to do it; animals can count, at least a bit. We just carry on to a greater extent than the crows or the raccoons do, so far as we’ve heard. From those, it takes some squinting, but you can think of negative whole numbers. And from that you get zero pretty quickly. You can also get rational numbers. The western mathematical tradition did this by looking at … er … ratios, that something might be to another thing as two is to five. Circumlocutions like that. Getting to irrational numbers is harder. Can be harder. Some irrational numbers beg you to notice them: the square root of two, for example. Square root of three. Numbers that come up from solving polynomial equations. But there are more numbers than those. Many more numbers. You might suspect the existence of a transcendental number, that isn’t the root of any polynomial that’s decently behaved. But finding one? Or finding that there are more transcendental numbers than there are rational numbers? This takes a certain brilliance to suspect, and to prove out. But we can get there with rational numbers — which we get to from collections of ones — and the idea of cutting sets of numbers into those smaller than and those bigger than something. Ashleigh Brilliant has more truth than, perhaps, he realized when he drew this panel.
Niklas Eriksson’s Carpe Diem for the 24th has goldfish work out the shape of space. A goldfish in this case has the advantage of being able to go nearly everywhere in the space. But working out what the universe must look like, when you can only run local experiments, is a great geometric problem. It’s akin to working out that the Earth must be a sphere, and about how big a sphere, from the surveying job one can do without travelling more than a few hundred kilometers.
For today’s entry, Iva Sallay, of Find The Factors, gave me an irresistible topic. I did not resist.
What’s purple and commutes?
An Abelian grape.
Whatever else you say about mathematics we are human. We tell jokes. I will tell some here. You may not understand the words in them. That’s all right. From the Abelian grape there, you gather this is some manner of wordplay. A pun, particularly. It’s built on a technical term. “Abelian groups” come from (not high school) Algebra. In an Abelian group, the group multiplication commutes. That is, if ‘a’ and ‘b’ are any things in the group, then their product “ab” is the same as “ba”. That is, the group works like ordinary addition on numbers does. We say “Abelian” in honor of Niels Henrik Abel, who taught us some fascinating stuff about polynomials. Puns are a common kind of humor. So common, they’re almost base. Even a good pun earns less laughter than groans.
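To see commuting and non-commuting operations side by side: integer addition is Abelian, while matrix multiplication famously is not. A small sketch, with a hand-rolled 2×2 multiply just to keep it self-contained:

```python
def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Integer addition commutes: 3 + 5 == 5 + 3. An Abelian operation.
# Matrix multiplication, in general, does not:
A = [[1, 1], [0, 1]]
B = [[1, 0], [1, 1]]

AB = matmul(A, B)  # [[2, 1], [1, 1]]
BA = matmul(B, A)  # [[1, 1], [1, 2]]

# AB != BA, so invertible 2x2 matrices under multiplication
# form a non-Abelian group.
```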
But mathematicians make many puns. A typical page of mathematics jokes has a whole section of puns. “What’s yellow and equivalent to the Axiom of Choice? Zorn’s Lemon.” “What’s nonorientable and lives in the sea?” “Möbius Dick.” “One day Jesus said to his disciples, `The Kingdom of Heaven is like 3x² + 8x – 9′. Thomas looked very confused and asked Peter, `What does the teacher mean?’ Peter replied, `Don’t worry. It’s just another one of his parabolas’.” And there are many jokes built on how it is impossible to tell the difference between the sounds of “π” and “pie”.
It shouldn’t surprise that mathematicians make so many puns. Mathematics trains people to know definitions. To think about precisely what we mean. Puns ignore definitions. They build nonsense out of the ways that sounds interact. Mathematicians practice how to make things interact, even if they don’t know or care what the underlying things are. If you’ve gotten used to proving things about ab, without knowing what ‘a’ or ‘b’ are, it’s difficult to avoid turning “poles on the half-plane” (which matters in some mathematical physics) into a story about Polish people on an aircraft.
If there’s a flaw to this kind of humor it’s that these jokes may sound juvenile. One of the first things that strikes kids as funny is that a thing might have several meanings. Or might sound like another thing. “Why do mathematicians like parks? Because of all the natural logs!”
Jokes can be built tightly around definitions. “What do you get if you cross a mosquito with a mountain climber? Nothing; you can’t cross a vector with a scalar.” “There are 10 kinds of people in the world, those who understand binary mathematics and those who don’t.” “Life is complex; it has real and imaginary parts.”
There are more sophisticated jokes. Many of them are self-deprecating. “A mathematician is a device for turning coffee into theorems.” “An introvert mathematician looks at her shoes while talking to you. An extrovert mathematician looks at your shoes.” “A mathematics professor is someone who talks in someone else’s sleep”. “Two people are adrift in a hot air balloon. Finally they see someone and shout down, `Where are we?’ The person looks up, and studies them, watching the balloon drift away. Finally, when they are barely in shouting range, the person on the ground shouts back, `You are in a balloon!’ The first passenger curses their luck at running across a mathematician. `How do you know that was a mathematician?’ `Because her answer took a long time, was perfectly correct, and absolutely useless!”’ These have the form of being about mathematicians. But they’re not really. It would be the same joke to say “a poet is a device for turning coffee into couplets”, the sleep-talker anyone who teaches, or have the hot-air balloonists discover a lawyer or a consultant.
Some of these jokes get more specific, with mathematics harder to extract from the story. The tale of the nervous flyer who, before going to the conference, sends a postcard that she has a proof of the Riemann hypothesis. She arrives and admits she has no such thing, of course. But she sends that word ahead of every conference. She knows if she died in a plane crash after that, she’d be famous forever, and God would never give her that. (I wonder if Ian Randal Strock’s little joke of a story about Pierre de Fermat was an adaptation of this joke.) You could recast the joke for physicists uniting gravity and quantum mechanics. But I can’t imagine a way to make this joke about an ISO 9000 consultant.
A dairy farmer knew he could be milking his cows better. He could surely get more milk, and faster, if only the operations of his farm were arranged better. So he hired a mathematician to find the optimal way to configure everything. The mathematician toured every part of the pastures, the milking barn, the cows, everything relevant. And then the mathematician set to work devising a plan for the most efficient possible cow-milking operation. The mathematician declared, “First, assume a spherical cow.”
This joke is very mathematical. I know of no important results actually based on spherical cows. But the attitude that tries to make spheres of cows comes from observing mathematicians. To describe any real-world process is to make a model of that thing. A model is a simplification of the real thing. You suppose that things behave more predictably than the real thing. You trust the error made by this supposition is small enough for your needs. A cow is complicated, all those pointy ends and weird contours. A sphere is easy. And, besides, cows are funny. “Spherical cow” is a funny string of sounds, at least in English.
The spherical-cow approach parodies the work mathematicians do. Many mathematical jokes are burlesques of deductive logic. Or not even burlesques. Charles Dodgson, known to humans as Lewis Carroll, wrote this in Symbolic Logic:
“No one, who means to go by the train and cannot get a conveyance, and has not enough time to walk to the station, can do without running;
This party of tourists mean to go by the train and cannot get a conveyance, but they have plenty of time to walk to the station.
∴ This party of tourists need not run.”
[ Here is another opportunity, gentle Reader, for playing a trick on your innocent friend. Put the proposed Syllogism before him, and ask him what he thinks of the Conclusion.
He will reply “Why, it’s perfectly correct, of course! And if your precious Logic-book tells you it isn’t, don’t believe it! You don’t mean to tell me those tourists need to run? If I were one of them, and knew the Premises to be true, I should be quite clear that I needn’t run — and I should walk!”
And you will reply “But suppose there was a mad bull behind you?”
And then your innocent friend will say “Hum! Ha! I must think that over a bit!” ]
The punch line is diffused by the text being so educational. And by being written in the 19th century, when it was bad form to excise any word from any writing. But you can recognize the joke, and why it should be a joke.
Not every mathematical-reasoning joke features some manner of cattle. Some are legitimate:
Claim. There are no uninteresting whole numbers.
Proof. Suppose there is a smallest uninteresting whole number. Call it N. That N is uninteresting is an interesting fact. Therefore N is not an uninteresting whole number.
Three mathematicians step up to the bar. The bartender asks, “Do you all want a beer?” The first mathematician says, “I don’t know.” The second mathematician says, “I don’t know.” The third says, “Yes.”
Some mock reasoning uses nonsense methods to get a true conclusion. It’s the fun of watching Mister Magoo walk unharmed through a construction site to find the department store exchange counter.
Venn Diagrams are not by themselves jokes (most of the time). But they are a great structure for jokes. And easy to draw, which is great for those of us who want to be funny but don’t feel sure about our drafting abilities.
And then there are personality jokes. Mathematics encourages people to think obsessively. Obsessive people are often funny people. Alexander Grothendieck was one of the candidates for “greatest 20th century mathematician”. His reputation is that he worked so well on abstract problems that he was incompetent at practical ones. The story goes that he was demonstrating something about prime numbers and his audience begged him to speak about a specific number, that they could follow an example. And that he grumbled a bit and, finally, said, “57”. It’s not a prime number. But if you speak of “Grothendieck’s prime”, many will recognize what you mean, and grin.
There are more outstanding, preposterous personalities. Paul Erdös was prolific, and a restless traveller. The stories go that he would show up at some poor mathematician’s door and stay with them several months. And then co-author a paper with the elevator operator. (Erdös is also credited as the originator of the “coffee into theorems” quip above.) John von Neumann was supposedly presented with this problem:
Two trains are on the same track, 60 miles apart, heading toward each other, each travelling 30 miles per hour. A fly travels 60 miles per hour, leaving one engine and flying toward the other. When it reaches the other engine it turns around immediately and flies back to the first engine. This is repeated until the two trains crash. How far does the fly travel before the crash?
The first, hard way to do this is to realize how far the fly travels is a series. It starts at, let’s say, the left engine and flies to the right. Add to that the distance from the right to the left train now. Then left to the right again. Right to left. This is a bunch of calculations. Most people give up on that and realize the problem is easier. The trains will crash in one hour. The fly travels 60 miles per hour for an hour. It’ll fly 60 miles total. John von Neumann, say witnesses, had the answer instantly. Did he recognize the trick? “I summed the series.”
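You can sum the series von Neumann claims to have summed, the hard way, and watch it converge on the easy answer. A quick sketch:

```python
# Summing the series for the fly-and-trains puzzle, the hard way.
# Trains start 60 miles apart, each doing 30 mph toward the other;
# the fly does 60 mph.
fly_speed = 60.0
closing_speed = 30.0 + 30.0  # the trains' combined speed
gap = 60.0

total = 0.0
for _ in range(60):  # each leg shrinks the gap by a factor of 3
    # The fly and the oncoming train close at fly_speed + 30 mph.
    leg_time = gap / (fly_speed + 30.0)
    total += fly_speed * leg_time        # distance flown this leg
    gap -= closing_speed * leg_time      # trains close in the meantime

# total converges to 60.0 miles -- matching the easy argument:
# the trains crash in one hour, and the fly does 60 mph for that hour.
```

The first leg alone is 40 miles; each later leg is a third of the one before, so the sum is 40 · (1 + 1/3 + 1/9 + …) = 60.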
The personalities can be known more remotely, from a handful of facts about who they were or what they did. “Cantor did it diagonally.” Georg Cantor is famous for great thinking about infinitely large sets. His “diagonal proof” shows the set of real numbers must be larger than the set of rational numbers. “Fermat tried to do it in the margin but couldn’t fit it in.” “Galois did it on the night before.” (Évariste Galois wrote out important pieces of group theory the night before a duel. It went badly for him. French politics of the 1830s.) Every field has its celebrities. Mathematicians learn just enough about theirs to know a couple of jokes.
The jokes can attach to a generic mathematician personality. “How can you possibly visualize something that happens in a 12-dimensional space?” “Easy, first visualize it in an N-dimensional space, and then let N go to 12.” Three statisticians go hunting. They spot a deer. One shoots, missing it on the left. The second shoots, missing it on the right. The third leaps up, shouting, “We’ve hit it!” An engineer and a mathematician are sleeping in a hotel room when the fire alarm goes off. The engineer ties the bedsheets into a rope and shimmies out of the room. The mathematician looks at this, unties the bedsheets, sets them back on the bed, declares, “this is a problem already solved” and goes back to sleep. (Engineers and mathematicians pair up a lot in mathematics jokes. I assume in engineering jokes too, but that the engineers make wrong assumptions about who the joke is on. If there’s a third person in the party, she’s a physicist.)
Do I have a favorite mathematics joke? I suppose I must. There are jokes I like better than others, and there are — I assume — finitely many different mathematics jokes. So I must have a favorite. What is it? I don’t know. It must vary with the day and my mood and the last thing I thought about. I know a bit of doggerel keeps popping into my head, unbidden. Let me close by giving it to you.
Integral z-squared dz
From 1 to the cube root of 3
Times the cosine
Of three π over nine
Equals log of the cube root of e.
This may not strike you as very funny. I’m not sure it strikes me as very funny. But it keeps showing up, all the time. That has to add up.
Thaves’s Frank and Ernest for the 18th is a bit of wordplay. There’s something interesting culturally about phrasing “lots of math, but no chemistry”. Algorithms as mathematics makes sense. Much of mathematics is about finding processes to do interesting things. Algorithms, and the mathematics which justifies them, can at least in principle be justified with deductive logic. And we like to think that the universe must make deductive-logical sense. So it is easy to suppose that something mathematical simply must make logical sense.
Chemistry, though. It’s a metaphor for whatever the difference is between a thing’s roster of components and the effect of the whole. The suggestion is that it is mysterious and unpredictable. It’s an attitude strange to actual chemists, who have a rather good understanding of why most things happen. My suspicion is that this sense of chemistry is old, dating to before we had a good understanding of why chemical bonds work. We have that understanding thanks to quantum mechanics, and its mathematical representations.
But we can still allow for things that happen but aren’t obvious. When we write about “emergent properties” we describe things which are inherent in whatever we talk about. But they only appear when the things are a large enough mass, or interact long enough. Some things become significant only when they have enough chance to be seen.
Zach Weinersmith’s Saturday Morning Breakfast Cereal for the 18th is about mathematicians’ favorite Ancient Greek philosopher they haven’t actually read. (In fairness, Zeno is hard to read, even for those who know the language.) Zeno’s famous for four paradoxes, the most familiar of which is alluded to here. To travel across a space requires travelling across half of it first. But this applies recursively. To travel any distance requires accomplishing infinitely many partial-crossings. How can you do infinitely many things, each of which take more than zero time, in less than an infinitely great time? But we know we do this; so, what aren’t we understanding? A callow young mathematics major would answer: well, pick any tiny interval of time you like. All but a handful of the partial-crossings take less time than your tiny interval. This seems like a sufficient answer and reason to chuckle at philosophers. Fine; but an instant has zero time elapsing during it. Nothing must move during that instant, then. So when does movement happen, if there is no movement during all the moments of time? Reconciling these two points slows the mathematician down.
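The callow mathematics major’s answer amounts to summing a geometric series of crossing times. A sketch, with the distance and speed normalized to one:

```python
# Zeno's partial crossings: cross half the remaining distance, then
# half of what's left, and so on. At constant speed each crossing takes
# half as long as the one before, so the infinitely many times sum to
# something finite.
distance = 1.0   # one unit of distance
speed = 1.0      # one unit of distance per unit of time

elapsed = 0.0
remaining = distance
for _ in range(50):
    step = remaining / 2
    elapsed += step / speed   # time spent on this partial crossing
    remaining -= step

# elapsed approaches 1.0: infinitely many crossings, finite total time.
```

Which answers how the times can add up, without touching the harder question of what motion is during a single instant.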
Patrick Roberts’s Todd the Dinosaur for the 19th mentions fractions. It’s only used to list a kind of mathematics problem a student might feign unconsciousness rather than do. And takes quite little space in the word balloon to describe. It’d be the same joke if Todd were asked to come up and give a ten-minute presentation on the Battle of Bunker Hill.
Julie Larson’s The Dinette Set for the 19th mentions the Rubik’s Cube. Sometime I should do a proper essay about its mathematics. Any Rubik’s Cube can be solved in at most 20 moves. And it’s apparently known there are some cube configurations that take at least 20 moves, so, that’s nice to have worked out. But there are many approaches to solving a cube, none of which I am competent to do. Some algorithms are, apparently, easier for people to learn, at the cost of taking more steps. And that’s fine. You should understand something before you try to do it efficiently.