10 November 2011
It's been a while since I wrote about anything here, so what I'm going to do now is to write about nothing. Not just any old nothing that might be lying around, but THE nothing that is at the bottom of everything! No, I haven't suddenly decided to switch to Zen Buddhism; this is a continuation of the whole numbers thing from last time. What I talked about last time was the evolution of numbers, and how we ended up with the place-value system, with a zero representing the concept of “nothing” in a particular location. What I had left out was the use of a zero by itself, to represent nothing as a real value. Because it is the most celebrated of Indian contributions to mathematics in pop culture, I want to try and find out where it actually comes from.
(Note: when I say “zero” here without elaboration, I mean a zero in its own right, not a place-value zero)
As far as we can see, the zero was invented and reinvented several times by different civilizations, each of which then promptly forgot about it. They either got stuck with the place-value zero (when they invented a place-value system at all) or couldn't accept the concept of something standing for nothing. Quite often, they seem to have stumbled onto the right ideas, and then decided that they weren't really correct, or worth pursuing, or didn't fit into their philosophy. Each civilization kept coming close to the point, but then shied away from it because of this philosophical problem. Still, each one came a bit closer to what was necessary.
As late as the 16th century, the Italian Renaissance mathematician Gerolamo Cardano worked out an entire system for solving cubic and quartic equations, that is, equations such as ax³ + bx² + cx + d = 0, without using the zero (he did this by taking the 'd' and putting it on the other side). Doing it in this fashion, he had to work through something like 15 different cases to solve what is really only one case with the zero in there. Yet, by the time of Newton, barely a couple of centuries later, the zero had become accepted whole-heartedly, and Sir Isaac doesn't hesitate to use it in his equations. But the story of this shift isn't one of two centuries. In fact, it spans more than ten times that number, stretching from before the Common Era to the days of the Enlightenment.
The tale starts with the Babylonians. They had a place-value system, which naturally had a place-value zero. They marked it in their cuneiform tablets using a mark resembling a double quote. But they never used this notation at the end of a number, so, in their base-60 system, they never distinguished between 2, 2 × 60 and 2 × 3600, the way we distinguish 2, 20 and 200. The context was supposed to imply which power of sixty was meant. Even with this limitation in their fundamental system, they worked out some pretty important results. However, they also had some astonishing gaps in their science; for example, they never figured out how to perform long division. Instead, they used a large table of reciprocals, approximating awkward reciprocals by convenient fractions. For example, to calculate 1/13, they would do something like
1/13 = 7/91 ≈ 7/90. While 1/13 comes out to approximately 0.0769…, 7/90 is 0.0777… Not particularly accurate, but much easier to calculate with.
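The trick can be sketched in a few lines of Python. The approximation 1/13 = 7/91 ≈ 7/90 is my illustrative choice for the general technique: 90, unlike 13, is a "regular" number (its only prime factors are 2, 3 and 5), so its reciprocal terminates in base 60 and could sit in a scribe's table:

```python
from fractions import Fraction

def to_sexagesimal(frac, places=4):
    """Expand the fractional part of `frac` as base-60 'digits'."""
    digits = []
    frac -= int(frac)
    for _ in range(places):
        frac *= 60
        digits.append(int(frac))
        frac -= int(frac)
    return digits

# 13 has a prime factor other than 2, 3 and 5, so 1/13 never
# terminates in base 60 and could not be tabulated exactly.
# A nearby regular denominator does terminate: 1/13 = 7/91 ≈ 7/90.
exact  = Fraction(1, 13)   # ≈ 0.0769...
approx = Fraction(7, 90)   # ≈ 0.0777..., and 90 = 2 * 3**2 * 5 is regular

print(to_sexagesimal(approx))         # terminates: [4, 40, 0, 0]
print(float(exact), float(approx))
```

Multiplying by the tabulated reciprocal then replaces division entirely, which is exactly what the reciprocal table was for.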
Similarly, they knew the solution to quadratic equations of the form x² + bx = c, but only when c is positive. They couldn't abstract the same equation into the general form ax² + bx + c = 0.
Obviously, this was because they had no way of expressing the right-hand-side 0 in the first place!
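To see the difference concretely, here is a sketch (the function names are mine) of the Babylonian completing-the-square recipe next to the modern formula that an explicit right-hand-side zero makes possible:

```python
import math

def babylonian_quadratic(b, c):
    """Solve x**2 + b*x = c for the positive root, Babylonian style.

    Requires c > 0: the recipe 'halve b, square it, add c, take the
    square root, subtract half of b' never has to name a zero.
    """
    if c <= 0:
        raise ValueError("the Babylonian recipe assumes c > 0")
    half = b / 2
    return math.sqrt(half * half + c) - half

def modern_quadratic(a, b, c):
    """Roots of a*x**2 + b*x + c = 0: one formula, all cases,
    made possible by the explicit 0 on the right-hand side."""
    disc = b * b - 4 * a * c
    r = math.sqrt(disc)  # real roots assumed, for brevity
    return ((-b + r) / (2 * a), (-b - r) / (2 * a))

print(babylonian_quadratic(6, 16))  # x**2 + 6x = 16  ->  x = 2.0
print(modern_quadratic(1, 6, -16))  # same equation as x**2 + 6x - 16 = 0
```

The modern version handles in one stroke what the Babylonians (and, much later, Cardano) had to split into many separate positive-coefficient cases.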
The Babylonians (and the Middle Eastern civilizations that followed them) often taught their sciences to, or traded their knowledge with, neighbouring civilizations. So, when they traded with Greece and India, they taught their system to both the Greeks and the Indians, creating a three-way link between east and west. Soon enough, the focus shifted from Mesopotamia to Greece and India.
Take a look at a protractor; it always starts with Zero at the beginning of the scale, and goes on towards 180° or 360°. To mark an angle that doesn't vary from the reference line, you need to have a concept of 0°.
Ptolemy, the great mathematician and astronomer, faced exactly this problem when working on astronomy. So, abandoning the cumbersome Greek system, he picked the Babylonian base-60 system, and added a symbol, O, to represent a null value. This is probably the first known use of a circle to represent zero. But his use is grudging: he never allows the zero to become a real integer, and only uses it as a numerator in his fractions, which represent arc minutes and arc seconds in angles. Such angles are written as “degree part | minutes seconds”, so 2° 3' 10" would be 2 | 3 10. An angle such as 10° would become “10 | 0 0”, while 0° would be “ | 0 0” (i.e., he just put in a space to represent the integral zero, and calculated based on the minutes and seconds).
He had to allow it at least for zero degrees, because it would have been practically impossible to work without doing so, but even there, he just uses an empty space. In any case, such was the state of the art in Hellenic lands, and neither he nor his successors ever really progressed beyond that point. They could work out some useful sine tables, but they still couldn't express the other side of an equation, and had to resort to confusing circumlocutions to solve each form of a problem separately. A single, general solution escaped them.
In India, starting from about AD 200 or so, our mathematicians started working on the same problems. But their quick adoption of the zero proved to be their ace in the hole!
First came Aryabhata, a few centuries after Ptolemy. As I described in my last article, he was among the first to describe both the base-10 numeral system, and the zero within that system as a place-holder for “nothing in the nth place”. But again, like Ptolemy, he doesn't give it a place of its own; it's only used in fractions and inside the place-value system.
But he was followed by Brahmagupta and the two Bhaskaras, I and II, who were three powerhouses of mathematics! The first Bhaskara and Brahmagupta were contemporaries, the former living in Maharashtra and the latter in Ujjain. Bhaskara was of Aryabhata's school, and was quite possibly his direct pupil (at any rate, the dates are close enough to admit the possibility). Brahmagupta, on the other hand, seems to have been a total iconoclast, highly critical of Aryabhata. When he cites Aryabhata, it's almost always to say that he was wrong about this or that. Nevertheless, it's to Brahmagupta that we owe the introduction of the zero, and of negative integers, to Aryabhata's system.
Brahmagupta gives us the first rules for using the zero: how to add a zero, how to subtract it, what happens when you multiply by a zero, or divide by it. These may seem childish to us, who probably studied them in primary school, but that's only because we're used to them. Someone had to write them down for the first time, and that someone was Brahmagupta of Bhinmal.
His rules for addition, subtraction and multiplication are easy enough for us to recognize:
“The sum of two positive quantities is positive, the sum of two negatives is negative, and the sum of a positive and a negative is their difference, or, if they're equal, zero. The sum of zero and a negative is negative, the sum of zero and a positive is positive, and the sum of two zeros is zero.” “The less is to be taken from the greater, positive from positive, negative from negative. If the greater is subtracted from the lesser, the difference is reversed (i.e., positive is made negative and negative positive). Negative minus zero is (the same) negative, and positive minus zero is positive; a positive subtracted from zero becomes negative, a negative subtracted from zero becomes positive, and zero from zero yields zero.” “The product of a negative and a positive is negative, the product of two negatives is positive, and that of two positives, positive. The product of zero and a negative, of zero and a positive, or of two zeros is zero.”
But he makes a slight mistake in division. “Positive divided by positive, or negative divided by negative, is positive. Zero divided by zero is zero. Positive divided by negative, or negative divided by positive, is negative. Any number divided by zero is that number divided by zero.”
Something doesn't square with modern theory here: we leave the division of zero by zero undefined, as one point where arithmetic simply breaks down, but Brahmagupta just defines it to be zero. He also decides that any number divided by zero is just n/0, left unevaluated, which says very little.
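Brahmagupta's division rules, including the two that don't survive into modern mathematics, can be transcribed almost mechanically. An illustrative sketch (the function name is mine):

```python
from fractions import Fraction

def brahmagupta_divide(a, b):
    """Division following Brahmagupta's rules, as quoted above.

    Modern mathematics leaves both zero-divisor cases undefined;
    Brahmagupta instead defines 0/0 = 0, and leaves n/0 standing
    as the unevaluated expression 'that number divided by zero'.
    """
    if b == 0:
        if a == 0:
            return 0          # his rule: zero divided by zero is zero
        return f"{a}/0"       # his rule: n/0 stays as n over zero
    return Fraction(a, b)     # the sign rules fall out of ordinary division

print(brahmagupta_divide(6, -3))  # negative by positive -> negative: -2
print(brahmagupta_divide(0, 0))   # 0, by his (non-modern) definition
print(brahmagupta_divide(5, 0))   # '5/0', left unevaluated
```

Everything except the `b == 0` branch is exactly modern arithmetic, which is a measure of how much of our primary-school rulebook he wrote.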
So, here we have it - the first true zero, and it's by an Indian mathematician!