Getting to Know the Numbers by Their First Names
Now, if you have never used a slide rule, I will point out that in most cases you are performing a computation with four or more factors in the numerator, divided by four or more factors in the denominator. The "answer" the slide rule gives you is something like "123." The decimal point location and the exponent (the "times ten to the
power," for example) are up to you. This is a crucial point. To get a correct answer, you have to do two things right. First, you have to figure out where the decimal point goes in the final
result based on the eight or more factors. Second, you have to be able to figure out, using rules of scientific notation, what the exponent of the final answer is going to be based on the exponents of all the factors. It was absurdly easy in these calculations to be "off by one" in either the exponent or the placement of the decimal point. So for a correct answer of 1.23, it was almost always possible, through carelessness or misjudgment, to come up with an answer of 0.123 or 12.3. This is called in the profession "off by a factor of ten."
Consider this: A factor of ten is enormous. Apply it either way to your salary and see what I mean. Yet it is very, very easy, using a slide rule, to be off by a factor of ten. What this means is that for every calculation you did, you had to have some idea of what the answer should be before you did the calculation. You had to "know" that the answer should be around "one," so that if you "computed" that it was 0.123 or 12.3, you would know that you had made a mistake and go back and find it. This meant that you had to estimate the result beforehand. It was a survival skill.
The people who never mastered it flunked out of engineering school, because it was just too easy to make a mistake. If you couldn't smell out a bad result, you were in big trouble.
Of course, this knowledge led to other interesting habits. Many of our calculations were multistage, requiring that you plug an intermediate answer into another formula to get a secondary result, and then plug that in, and so on. So mistakes made early propagated, and if you got a completely absurd result at the end, it was a bear to backtrack all the way to the beginning. So we developed the habit of maniacally testing all of our intermediate results for reasonableness before going on. We became our own computational QC inspectors, not letting a computation proceed unless we were sure we were still in the ballpark. This was a great instinct to develop early in our careers.
Needless to say, as a by-product, we also got to be pretty good at computing things in our heads. None of us were equal to the legendary Richard Feynman, but we all got to be pretty good. It was sometimes enough to make a liberal-arts student's head spin. For us, it was just another acquired survival trait.
By the way, we often were asked about performing computations to only three significant digits, the "123" mentioned previously. Was that good enough? Well, it's one part in a thousand, roughly. Most experimental physical data is lucky to be within plus or minus 1 percent, or
ten times less accurate. So if you had three or four factors in the numerator and three or four factors in the denominator, each with at best 1 percent error, it was illusory to think that the calculated answer could be good to one part in a thousand. Ergo, slide rules could be used for most calculations with few problems.
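The argument in the paragraph above is standard error propagation, and it can be made concrete. A sketch, with my own example numbers: for a product or quotient of independent factors, the relative errors combine in quadrature, so even several 1-percent factors leave the answer far coarser than one part in a thousand.

```python
import math

def product_relative_error(rel_errors):
    """For a product or quotient of independent factors, relative
    errors combine in quadrature: e_total = sqrt(sum(e_i ** 2))."""
    return math.sqrt(sum(e * e for e in rel_errors))

# Seven factors, each good to +/- 1 percent:
total = product_relative_error([0.01] * 7)
print(round(total, 4))  # roughly 0.026, i.e. about 3 percent
```

With the answer only good to a few percent, a slide rule's three digits were already more precision than the data deserved.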
Of course, when computers came in, it all changed. Then calculations were done by iterating the equations, replacing derivatives by finite differences, as it were. Then, because the result was obtained by cycling through thousands of steps, errors could accumulate insidiously. That's why computers have to have so much higher precision; you need lots of precision at each step to guard against round-off error. But in the end, the result can never be more accurate than the input data. Many people have either never known that or lost sight of it over the years.
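"Replacing derivatives by finite differences" can be illustrated with the simplest possible case; this is my own toy example, not one from the text. Euler's method integrates y' = -y by cycling through many small steps, and each step contributes a small error on top of whatever rounding the machine does, which is why the per-step precision matters so much.

```python
import math

def euler_decay(steps):
    """Integrate y' = -y from t = 0 to t = 1 by replacing the
    derivative with a finite difference and stepping forward."""
    dt = 1.0 / steps
    y = 1.0
    for _ in range(steps):
        y += dt * (-y)  # finite-difference update of y' = -y
    return y

exact = math.exp(-1.0)  # true answer: e^(-1)
for steps in (10, 100, 10000):
    print(steps, abs(euler_decay(steps) - exact))
```

Each step's error is tiny, but it is the cumulative drift over thousands of cycles, truncation plus round-off, that the extra machine precision is there to hold in check. And no amount of precision in the arithmetic makes the final result more accurate than the measured inputs fed into it.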