My Little 2021 Mathematics A-to-Z: Addition


John Golden, who so far as I know doesn’t have an active blog, suggested this week’s topic. It pairs nicely with last week’s. I link to that in text, but if you would like to read all of this year’s Little Mathematics A to Z it should be at this link. And if you’d like to see all of my A-to-Z projects, please try this link. Thank you.

Addition

When I wrote about multiplication I came to the peculiar conclusion that it was the same as addition. This is true only in certain lights. When we study [abstract] algebra we look at things that look like arithmetic. The simplest useful thing that looks like arithmetic is a group. It has a set of elements, and a pairwise “group operation”. That group operation we call multiplication, if we don’t have a better name. We give it two elements and it gives us one. Under certain circumstances, this multiplication looks just like addition does.

But we have reason to think addition and multiplication aren’t the same. Where do we get addition?

We can make a meaningful addition by giving it something to interact with. By adding another operation. This turns the group into a ring. As it has two operations, it’s hard to resist calling one of them addition and the other multiplication. The new multiplication follows many of the rules the addition did. Adding two elements together gives you an element in the ring. So does multiplying. Addition is associative: a + (b + c) is the same thing as (a + b) + c . So is multiplication: a \times (b \times c) is the same thing as (a \times b) \times c .

And then the addition and the multiplication have to interact. If they didn’t, we’d just have a group with two operations. I don’t know anyone who’s found a good use for that. The way addition and multiplication interact we call distribution. This is represented by two rules, both of them depending on elements a, b, and c:

a\times(b + c) = a\times b + a\times c

(a + b)\times c = a\times c + b\times c

This is where we get something we have to call addition. It’s in having the two interacting group operations.
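If it helps to see all these rules acting on something concrete, here is a small Python sketch (my own aside, nothing the essay calls for) that checks associativity of both operations, and distribution, for arithmetic modulo six:

# A small sketch: check the ring axioms discussed above for integers modulo n.
# (Illustrative only; the choice of n = 6 is arbitrary.)
n = 6
elements = range(n)

def add(a, b):
    return (a + b) % n

def mul(a, b):
    return (a * b) % n

for a in elements:
    for b in elements:
        for c in elements:
            # Both operations are associative ...
            assert add(a, add(b, c)) == add(add(a, b), c)
            assert mul(a, mul(b, c)) == mul(mul(a, b), c)
            # ... and multiplication distributes over addition, on both sides.
            assert mul(a, add(b, c)) == add(mul(a, b), mul(a, c))
            assert mul(add(a, b), c) == add(mul(a, c), mul(b, c))

print("Arithmetic modulo 6 passes the checks above.")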

A problem which would have worried me at age eight: do we know we’re calling the correct operation “addition”? Yes, yes, names are arbitrary. But are we matching the thing we think we’re doing when we calculate 2 + 2 to addition and the thing for 2 x 2 to multiplication? How do we tell these two apart?

For all that they start the same, and resemble one another, there are differences. Addition has an identity, something that works like zero. a + 0 is always a , whatever a is. Multiplication … the multiplication we use every day has an identity, that is, 1. Are we required to have a multiplicative identity, something so that a \times 1 is always a ? That depends on what it said in the Introduction to Algebra textbook you learned on. If you want to be clear your rings do have a multiplicative identity you call it a “unit ring”. If you want to be clear you don’t care, I don’t know what to say. I’m told some people write that as “rng”, to hint that this identity is missing.

Addition always has an inverse. Whatever element a you pick, there is some -a so that -a + a is the additive identity. Multiplication? Even if we have a unit ring, there’s not always a reciprocal. The integers are a unit ring. But only two integers, 1 and -1, have an integer multiplicative inverse, something you can multiply them by to get 1. If every element of your unit ring other than the additive identity does have a multiplicative inverse, the ring is called a division algebra. The rational numbers, for example, are a division algebra.
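To see why the rationals qualify, here is a small worked check (my own aside): any rational number other than zero can be written as \frac{p}{q} with p and q nonzero integers, and then

\frac{p}{q} \times \frac{q}{p} = 1

so every nonzero rational has a reciprocal. The integers only manage this for 1 and -1.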

So for some rings, like the integers, there’s an obvious difference between addition and multiplication. But for the rational numbers? Can we tell the operations apart?

We can, through the additive identity, which please let me call 0. And the multiplicative identity, which please let me call 1. Is there a multiplicative inverse of 0? Suppose there is one; let me call it c , because I need some name. Then of all the things in the world, we know this:

0 \times c = 1

I can replace anything I like with something equal to it. So, for example, I can replace 0 with the sum of an element and its additive inverse. Like, (-a + a) for some element a . So then:

(-a + a) \times c = 1

And distribute this away!

-a\times c + a\times c = 1

I don’t know what number a\times c is, nor what its additive inverse -a\times c is. But I know their sum is zero. And so

0 = 1

This looks like trouble. But, all right, why not have the additive and the multiplicative identities be the same number? Mathematicians like to play with all kinds of weird things; why not this weirdness?

The why not is that you work out pretty fast that every element has to be equal to every other element. If you’re not sure how, consider the starting line of that little proof, but with an element b :

0 \times c \times b = 1 \times b
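To spell out the step I’m waving at here (my own filling-in, using the essay’s own distribution trick): if 0 \times c really were 1, then for any element b

b = 1 \times b = (0 \times c) \times b = 0 \times (c \times b) = 0

where the last equality uses the same replace-zero-with-(-a + a)-and-distribute argument as above to show that zero times anything is zero. Every b you pick comes out equal to zero, so the ring has just the one element.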

So there, finally, is a crack between addition and multiplication. Addition’s identity element, its zero, can’t have a multiplicative inverse. Multiplication’s identity element, its one, must have an additive inverse. We get addition from the thing we can’t un-multiply.

It may have struck you that if all we want is a ring with the lone element of 0 (or 1), then we can have addition and multiplication be indistinguishable again. And have the additive and multiplicative identities be the same thing. There’s nothing else for them to be. This is true, and we can. Unfortunately this ring doesn’t do much that’s interesting, except maybe prove some theorem we were working on isn’t always true. So we usually draw a box around it, acknowledge it once, and then exclude it from division algebras and fields and other things of interest. It’s much the same way we normally rule out 1 as a prime number. It’s an example that is too much bother to include given how unenlightening it is.

You can have groups and attach to them a multiplication and an addition and another binary operation. Those aren’t of such general interest that you study them much as an undergraduate.

And this is what we know of addition. It looks almost like a second multiplication. But it interacts just enough with multiplication to force the two to be distinguishable. From that we can create mathematical structures as interesting as arithmetic is.

The Difference Of Two Triangles


[ Trapezoid Week continues! ]

Yesterday I set out a diagram, showing off one example of a trapezoid, with which I mean to show one way to get the formula for a trapezoid’s area. The approach being used here is to find two triangles so that the difference in area between the two is the area of the trapezoid. This can often be a convenient way of finding the area of something: find simple shapes to work with so that the area we want is the sum or the difference of these easy areas. Later on I mean to do this area as the sum of simple shapes.

For now, though, I have the trapezoid set up so its area will be the difference of two triangle areas. The area of a triangle is a simple enough formula: it’s one-half the length of the base times the height. We’ll see much of that formula.
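In symbols, for a triangle with base b and height h, that formula is

A = \frac{1}{2} b h

and the trapezoid’s area will come out as the difference of two expressions of that shape.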

Continue reading “The Difference Of Two Triangles”

Eliminating Your Footprints


When last we discussed divisibility rules, particularly rules for just adding up the digits in a number to tell what it might divide by, we had worked out rules for testing divisibility by eight. In that, we take the sum of four times the hundreds digit, plus two times the tens digit, plus the units digit, and if that sum is divisible by eight, then so is the original number. This hasn’t got the slick, smooth memorability of the rules for three and nine — just add all the digits up — or the simplicity of checking for divisibility by ten, five, or two — just look at the last digit — but it’s not a complicated rule either.

Still, we came at it through an experimental method, fiddling around with possible rules until we found one which seemed to work. It seemed to work; and since there are only a thousand possible cases to consider, we can check that it works in every one of them. That’s tiresome to do, but it functions, and it’s a legitimate way of forming mathematical rules. Quite a number of proofs amount to dividing a problem into several different cases and showing that whatever we mean to prove is so in each case.
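If grinding through those thousand cases by hand sounds unappealing, here is a little Python sketch (my own aside, not part of the original argument) that does the checking:

# Check the divisibility-by-eight rule for every hundreds/tens/units combination.
# The rule: 4*(hundreds digit) + 2*(tens digit) + (units digit) is divisible by 8
# exactly when the three-digit chunk itself is.
for hundreds in range(10):
    for tens in range(10):
        for units in range(10):
            number = 100 * hundreds + 10 * tens + units
            rule_sum = 4 * hundreds + 2 * tens + units
            assert (number % 8 == 0) == (rule_sum % 8 == 0)

print("The rule holds in all 1000 cases.")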

Let’s see what we can do to tidy up the proof, though, and see if we can make it work without having to test out so many cases. We can, or I’d have been foolish to start this essay rather than another; along the way, though, we can remove the traces that show the experimenting that led to the technique. We can put forth the cleaned-up reasoning and look all the more clever because it isn’t so obvious how we got there. This is another common property of proofs; the most attractive or elegant method of presenting them can leave the reader wondering how it was ever imagined.

Continue reading “Eliminating Your Footprints”

A Quick Impersonation Of Base Nine


I now resume the thread of spotting multiples of numbers easily. Thanks to the way positional notation lets us write out numbers as some multiple of our base, which is so nearly always ten it takes some effort to show where it’s not, it’s easy to spot whether a number is a multiple of that base, or some factor of the base, just by looking at the last digit. And if we’re interested in factors of some whole power of the base, of the ten squared which is a hundred, or the ten cubed which is a thousand, or so, we can find all we want to know just by looking at the last two or last three or however many digits we need.
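As a small illustration of that last-few-digits idea (a sketch of my own, in Python): four goes into a hundred, so divisibility by four never depends on anything past the last two digits.

# Divisibility by 4 depends only on the last two digits, because 4 divides 100.
def divisible_by_4(n):
    return (n % 100) % 4 == 0

# Spot-check against ordinary division for a run of numbers.
for n in range(100000):
    assert divisible_by_4(n) == (n % 4 == 0)

print("Checking the last two digits always agrees with dividing by 4.")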

Sadly, three and nine don’t go into ten, and never go into any power of ten either. Six and seven won’t either, although that exhausts the numbers below ten which don’t go into any power of ten. Of course, we also have the unpleasant point that eleven won’t go into a hundred or thousand or ten-thousand or more, and neither will many other numbers we’d like.

If we didn’t have to use base ten, if we could use base nine, then we could get the benefits of instantly recognizing multiples of three or nine that we get for multiples of five or ten. If the digits of a number are some strand R finished off with an a, then the number written as Ra means the number gotten by multiplying nine by R and adding to that a. The whole strand will be divisible by nine whenever a is, which is to say when a is zero; and the whole strand will be divisible by three when a is, that is, when a is zero, three, or six.
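Here is a quick Python sketch of that claim (my own illustration): the last base-nine digit of a number is just its remainder on dividing by nine, so the rule is easy to test.

# In base nine, the last digit settles divisibility by three and by nine.
def last_base_nine_digit(n):
    return n % 9

for n in range(10000):
    a = last_base_nine_digit(n)
    # Divisible by nine exactly when the last base-nine digit is zero ...
    assert (n % 9 == 0) == (a == 0)
    # ... and by three when that digit is zero, three, or six.
    assert (n % 3 == 0) == (a in (0, 3, 6))

print("The last base-nine digit settles divisibility by three and by nine.")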

Continue reading “A Quick Impersonation Of Base Nine”

How To Recognize Multiples Of Ten From Quite A Long Way Away


I got so caught up last week talking about the different possible bases that I forgot to get to the interesting thing I had wanted to say about those bases. I suppose that will happen as long as I write to passion rather than plan. It gives me something to speak about today, at least.

Here is one thing implied by having a consistent base for all these numbers in which position is relevant: a one in each column represents the base-number of units of whatever the next column over represents. That is, in base ten, a one in the tens column represents ten units of one; a one in the thousands column represents ten units of one hundred. I mention this obvious point because it is so familiar and simple as to pass into invisibility. (It also extends past the decimal point; a one in the hundredths column is equivalent to ten units of a thousandth. But I want to talk about divisibility, in the whole numbers, and so leave fractions for some later time.)
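To make that concrete (a sketch of my own, in Python), a whole number really is its digits times descending powers of ten, with each column worth ten of whatever the column to its right is worth:

# Write out 4,096 as digits times powers of ten.
n = 4096
digits = [int(d) for d in str(n)]
powers = [10 ** k for k in range(len(digits) - 1, -1, -1)]

assert n == sum(d * p for d, p in zip(digits, powers))
print(list(zip(digits, powers)))   # [(4, 1000), (0, 100), (9, 10), (6, 1)]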

This is tidy, in a way that we don’t see in variable bases. It will give us one tool for neat little divisibility rules. That tool appears just by writing things in the appropriate way, which is the best sort of tool. It saves on time trying to prove it works.

Continue reading “How To Recognize Multiples Of Ten From Quite A Long Way Away”

In Defense Of FOIL


I do sometimes read online forums of educators, particularly math educators, since it’s fun to have somewhere to talk shop, and the topics of conversation are constant enough you don’t have to spend much time getting the flavor of a particular group before participating. If you suppose the students are lazy, the administrators meddling, the community unsupportive, and the public irrationally terrified of mathematics, you’ve covered most forum threads. I had no luck holding forth my view on one particular topic, though, so I’ll try fighting again here where I can easily squelch the opposition.

The argument, a subset of students-are-lazy (as they don’t wish to understand mathematics), was about a mnemonic technique called FOIL. It’s a tool to help people multiply binomials. Binomials are the sum (or difference) of two quantities, for example, (a + 2) or (b + 5). Here a and b are numbers whose value I don’t care about; I don’t care about the 2 or 5 either, but by picking specific values I avoid having too much abstraction in my paragraph. The product of (a + 2) with (b + 5) is the sum of all the pairs made by multiplying one term in the first binomial by one term in the second. There are four such pairs: a times b, and a times 5, and 2 times b, and 2 times 5. And therefore the product (a + 2) * (b + 5) will be a*b + a*5 + 2*b + 2*5. That would usually be cleaned up by writing 5*a instead of a*5, and by writing 10 instead of 2*5, so the sum would become a*b + 5*a + 2*b + 10.

FOIL is a way of making sure one has covered all the pairs. The letters stand for First, Outer, Inner, Last, and they mean: take the product of the First terms in each binomial, a and b; and those of the Outer terms, a and 5; and those of the Inner terms, 2 and b; and those of the Last terms, 2 and 5.
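Here is a sketch of FOIL in Python (my own aside; the use of the sympy library is my choice of tool, not anything from the forum discussion):

# FOIL, checked symbolically with sympy.
from sympy import symbols, expand

a, b = symbols('a b')

# The four FOIL products for (a + 2) * (b + 5):
first = a * b      # First terms:  a and b
outer = a * 5      # Outer terms:  a and 5
inner = 2 * b      # Inner terms:  2 and b
last  = 2 * 5      # Last terms:   2 and 5

# Their sum matches the fully expanded product.
assert expand((a + 2) * (b + 5) - (first + outer + inner + last)) == 0
print(expand((a + 2) * (b + 5)))   # a*b + 5*a + 2*b + 10, perhaps in another term order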

Here is my distinguished colleague’s objection to FOIL: Nobody needs it. This is true.

Continue reading “In Defense Of FOIL”