We often use the words precision and accuracy when we discuss floating-point computation.

DEFINITION: The precision of a floating-point value is a measure of the number of significant digits it has in its significand. The more significant digits a floating-point value has, the more precise it is.

Double-precision double values are more precise than single-precision float values. The double literal 0.3333333333333333 is a more precise representation of the exact value 1/3 than the float literal 0.3333333f. We mentioned in the previous section that having more significant digits in the significand allows us to represent more values, making the set of representable values more "finely grained." With higher-precision values, we can have a representable value that is closer to an exact value than is possible with lower-precision values. We saw this in the previous paragraph: 0.3333333333333333 is closer to 1/3 than is 0.3333333f.

DEFINITION: The accuracy of a floating-point value is its correctness, or closeness to a true exact value.

If the true value is exactly 1/3 but the computed double value is 0.3444444444444444, then we can say that the double value is inaccurate, despite its precision, because most of its significant digits are wrong. On the other hand, the float literal 0.3333333f is more accurate because its value is closer to the true value. Computational errors, such as roundoff and cancellation, affect a floating-point value's accuracy. Increasing the precision from single to double gives us the potential for increased accuracy, but it doesn't guarantee it.
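The distinction between precision and accuracy can be demonstrated with a short sketch. It compares the literals from the text against 1/3 (computed here as the double quotient 1.0 / 3.0, the closest double to the true value). The class name is chosen for illustration only.

```java
public class PrecisionVsAccuracy {
    public static void main(String[] args) {
        // The closest double to the true value 1/3
        double trueThird = 1.0 / 3.0;

        double preciseThird = 0.3333333333333333;  // ~16 significant digits
        float lessPreciseThird = 0.3333333f;       // ~7 significant digits
        double inaccurate = 0.3444444444444444;    // precise but wrong

        // Absolute errors relative to 1/3
        double doubleError = Math.abs(preciseThird - trueThird);
        double floatError = Math.abs((double) lessPreciseThird - trueThird);
        double inaccurateError = Math.abs(inaccurate - trueThird);

        // Higher precision: the double literal lands closer to 1/3
        System.out.println(doubleError < floatError);       // true

        // Accuracy is separate: the less precise float literal is
        // more accurate than the precise-but-wrong double value
        System.out.println(floatError < inaccurateError);   // true
    }
}
```

Running this shows both comparisons hold: the float literal's error is around 10^-8, the double literal's error is below 10^-16, and the inaccurate value is off by about 0.011 no matter how many digits it carries.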