Java Number Cruncher. The Java Programmer's Guide to Numerical Computing Authors: Mak R. Published year: 2001 Pages: 12-13/141

Chapter 1. Floating-Point Numbers Are Not Real!

### 1.4 Precision and Accuracy

We often use the words precision and accuracy when we discuss floating-point computation.

DEFINITION: The precision of a floating-point value is a measure of the number of significant digits it has in its significand. The more significant digits a floating-point value has, the more precise it is.

Double-precision double values are more precise than single-precision float values. The double literal 0.3333333333333333 is a more precise representation of the exact value 1/3 than the float literal 0.3333333f.

We mentioned in the previous section that having more significant digits in the significand allows us to represent more values, thus making the set of representable values more "finely grained." With higher precision values, we can have a representable value that is closer to an exact value than is possible with lesser precision values. We saw this in the previous paragraph: 0.3333333333333333 is closer to 1/3 than is 0.3333333f.
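A quick sketch (mine, not the book's; the class name is arbitrary) makes the grain size visible. Widening 1f/3f to a double exposes how far the nearest float sits from 1/3, while 1.0/3.0 lands much closer:

```java
public class PrecisionDemo
{
    public static void main(String[] args)
    {
        float  f = 1f/3f;     // nearest float  to 1/3
        double d = 1.0/3.0;   // nearest double to 1/3

        // Widen the float to a double to see its true value.
        System.out.println("float  1/3 = " + (double) f);
        System.out.println("double 1/3 = " + d);

        // The float sits roughly 1E-8 away from the double value.
        System.out.println("difference = " + Math.abs((double) f - d));
    }
}
```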

DEFINITION: The accuracy of a floating-point value is its correctness, or closeness to a true exact value.

If the true value is exactly 1/3 but the computed double value is 0.3444444444444444, then we can say that the double value is inaccurate, despite its precision, because most of its significant digits are wrong. On the other hand, the float literal 0.3333333f is more accurate because its value is closer to the true value. Computational errors, such as roundoff and cancellation, affect a floating-point value's accuracy. Increasing the precision from single to double gives us the potential for increased accuracy, but it doesn't guarantee it.
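The distinction can be shown numerically. This sketch (mine, not the book's) compares the error of the precise-but-wrong 16-digit value against the less-precise-but-close float literal, both measured against 1.0/3.0 as a stand-in for the exact 1/3:

```java
public class AccuracyDemo
{
    public static void main(String[] args)
    {
        double trueValue   = 1.0/3.0;            // stand-in for exact 1/3
        double precise     = 0.3444444444444444; // 16 digits, mostly wrong
        double lessPrecise = 0.3333333f;         // 7 digits, but close

        // The 7-digit float is far more accurate than the 16-digit value.
        System.out.println("error of precise value:      "
                           + Math.abs(precise - trueValue));
        System.out.println("error of less precise value: "
                           + Math.abs(lessPrecise - trueValue));
    }
}
```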



### 1.5 Disobeying the Laws of Algebra

The real numbers of pure math always obey the laws of algebra. For example, the associative law of addition states that, for any real numbers a, b, and c,

(a + b) + c = a + (b + c)

Similarly, the associative law of multiplication states that

(a × b) × c = a × (b × c)

So how do the floating-point numbers behave? Program 1-3 indicates that floating-point addition and multiplication are not always associative. See Listing 1-3.

##### Listing 1-3 Failures of the associative laws for floating-point addition and multiplication.
```
package numbercruncher.program1_3;

/**
 * PROGRAM 1-3: Not Associative
 *
 * Demonstrate that floating-point addition and multiplication
 * are not associative operations.
 */
public class NotAssociative
{
    public static void main(String args[])
    {
        float a = 1;
        float b = 3.0E-8f;
        float c = 4.0E-8f;

        System.out.println("a = " + a);
        System.out.println("b = " + b);
        System.out.println("c = " + c);

        float s1 = (a+b)+c;
        float s2 = a+(b+c);

        System.out.println();
        System.out.println("(a+b)+c = " + s1);
        System.out.println("a+(b+c) = " + s2);

        float d = 0.54385f;
        float e = 0.9599806f;
        float f = 0.2252711f;

        System.out.println();
        System.out.println("d = " + d);
        System.out.println("e = " + e);
        System.out.println("f = " + f);

        s1 = (d+e)+f;
        s2 = d+(e+f);

        System.out.println();
        System.out.println("(d+e)+f = " + s1);
        System.out.println("d+(e+f) = " + s2);

        // Multiplication: Underflow
        float u = 0.5f;
        float v = 1.0E-45f;   // rounds up to Float.MIN_VALUE, 1.4E-45f
        float w = 3.0E38f;

        System.out.println();
        System.out.println("u = " + u);
        System.out.println("v = " + v);
        System.out.println("w = " + w);

        float p1 = (u*v)*w;
        float p2 = u*(v*w);

        System.out.println();
        System.out.println("(u*v)*w = " + p1);
        System.out.println("u*(v*w) = " + p2);

        // Multiplication: Roundoff error
        float x = 0.9091322f;
        float y = 0.8606576f;
        float z = 0.5684686f;

        System.out.println();
        System.out.println("x = " + x);
        System.out.println("y = " + y);
        System.out.println("z = " + z);

        p1 = (x*y)*z;
        p2 = x*(y*z);

        System.out.println();
        System.out.println("(x*y)*z = " + p1);
        System.out.println("x*(y*z) = " + p2);
    }
}
```

Output:

```
a = 1.0
b = 3.0E-8
c = 4.0E-8

(a+b)+c = 1.0
a+(b+c) = 1.0000001

d = 0.54385
e = 0.9599806
f = 0.2252711

(d+e)+f = 1.7291018
d+(e+f) = 1.7291017

u = 0.5
v = 1.4E-45
w = 3.0E38

(u*v)*w = 0.0
u*(v*w) = 2.1019477E-7

x = 0.9091322
y = 0.8606576
z = 0.5684686

(x*y)*z = 0.4447991
x*(y*z) = 0.44479907
```

We can analyze Program 1-3's output. In the first addition example, the exact value of (a + b) is 1.00000003, but single-precision float has insufficient precision to represent that sum, and so it is rounded down to 1. Adding the value of c to that sum also results in the rounded-down value of 1 for the same reason. However, the exact value of (b + c) is 0.00000007, and 1 plus that sum is exactly 1.00000007, which, as a float value, is rounded up to 1.0000001.

In the second addition example, the exact sum d + e + f falls between two representable float values. Adding one way rounds up, and adding the other way rounds down.
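One way to see exactly where the rounding happens (a sketch of mine, not from the book) is to print each float sum's exact decimal expansion with java.math.BigDecimal, whose double constructor shows the value digit for digit:

```java
import java.math.BigDecimal;

public class ExactSums
{
    public static void main(String[] args)
    {
        float a = 1;
        float b = 3.0E-8f;
        float c = 4.0E-8f;

        // new BigDecimal(double) displays the exact binary value.
        System.out.println("(a+b)   = " + new BigDecimal(a + b));
        System.out.println("(b+c)   = " + new BigDecimal(b + c));
        System.out.println("a+(b+c) = " + new BigDecimal(a + (b + c)));
    }
}
```

Printing (a+b) this way shows it collapsed to exactly 1, while a+(b+c) carries the extra digits that survived the rounding.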

In the first multiplication example, (u*v) underflows because the value of v is so small, causing the product to be set to 0, and so (u*v)*w is also 0. However, the product (v*w) is a valid float value greater than 0, and so then is u*(v*w).
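The underflow is easy to isolate (a sketch of mine, not from the book). The literal 1.0E-45f itself rounds up to the smallest positive float, Float.MIN_VALUE, and halving that value underflows to zero:

```java
public class UnderflowDemo
{
    public static void main(String[] args)
    {
        float v = 1.0E-45f;   // rounds up to Float.MIN_VALUE, 1.4E-45f

        System.out.println("v == Float.MIN_VALUE: " + (v == Float.MIN_VALUE));

        // Half the smallest positive float underflows to zero.
        System.out.println("0.5f*v = " + (0.5f * v));
    }
}
```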

The second multiplication example is similar to the second addition example. The exact product x·y·z falls between two representable float values. Multiplying one way rounds up, and multiplying the other way rounds down.

How often do the floating-point addition and multiplication operations fail their associative laws? Do the failures occur so rarely that we can safely ignore them? Program 1-4 repeatedly generates sets of three random floating-point values to test addition and multiplication. See Listing 1-4.

##### Listing 1-4 The percentage of failures of the associative laws for floating-point addition and multiplication.
```
package numbercruncher.program1_4;

import java.util.Random;

/**
 * PROGRAM 1-4: Not Associative Percentage
 *
 * Figure out what percentage of floating-point additions
 * and multiplications fail their associative laws.
 */
public class NotAssocPercentage
{
    private static final int TRIALS = 1000000;  // one million

    public static void main(String args[])
    {
        Random random = new Random();

        int addCount  = 0;
        int multCount = 0;

        // Loop to perform trials.
        for (int i = 0; i < TRIALS; ++i) {

            // Generate three random floating-point values.
            float a = random.nextFloat();
            float b = random.nextFloat();
            float c = random.nextFloat();

            // Add both ways.
            float s1 = (a+b)+c;
            float s2 = a+(b+c);

            // Multiply both ways.
            float p1 = (a*b)*c;
            float p2 = a*(b*c);

            // Compare sums and products and count the failures.
            if (s1 != s2) ++addCount;
            if (p1 != p2) ++multCount;
        }

        System.out.println((100*addCount)/TRIALS + "% failures of the " +
                           "associative law of addition.");
        System.out.println((100*multCount)/TRIALS + "% failures of the " +
                           "associative law of multiplication.");
    }
}
```

Output:

```
17% failures of the associative law of addition.
34% failures of the associative law of multiplication.
```

Well, it's probably not a good idea to ignore the failures!

Floating-point arithmetic also fails the distributive law, which states that, for any real values a, b, and c,

a × (b + c) = (a × b) + (a × c)

and the cancellation law (not to be confused with the cancellation error, defined earlier), which states that, for any real values a, b, and c with a ≠ 0,

if a × b = a × c, then b = c
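The distributive law fails for the same reason the associative laws do: the two sides round at different points. This random-sampling sketch in the style of Program 1-4 (mine, not the book's; the seed is an arbitrary choice for repeatability) finds failures readily:

```java
import java.util.Random;

public class NotDistributive
{
    public static void main(String[] args)
    {
        final int TRIALS = 1000000;  // one million
        Random random = new Random(42);

        int failCount = 0;
        for (int i = 0; i < TRIALS; ++i) {
            float a = random.nextFloat();
            float b = random.nextFloat();
            float c = random.nextFloat();

            // Compare a*(b+c) against a*b + a*c.
            if (a*(b + c) != a*b + a*c) ++failCount;
        }

        System.out.println((100*failCount)/TRIALS
                           + "% failures of the distributive law.");
    }
}
```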

Program 1-5, shown in Listing 1-5, demonstrates that we cannot count on the multiplicative inverse law, which states that, for any nonzero real value a,

a × (1/a) = 1

##### Listing 1-5 The percentage failures of the inverse law for floating-point multiplication.
```
package numbercruncher.program1_5;

import java.util.Random;

/**
 * PROGRAM 1-5: No Multiplicative Inverse
 *
 * Figure out what percentage of floating-point
 * multiplicative inverses fail.
 */
public class NoMultInverse
{
    private static final int TRIALS = 1000000;  // one million

    public static void main(String args[])
    {
        Random random = new Random();
        int failCount = 0;

        // Loop to perform trials.
        for (int i = 0; i < TRIALS; ++i) {

            // Generate a random floating-point value.
            float a = random.nextFloat();

            // Multiply by the inverse, both ways.
            float p1 = a*(1f/a);
            float p2 = (1f/a)*a;

            // Compare products and count the failures.
            if ((p1 != 1) || (p2 != 1)) ++failCount;
        }

        System.out.println((100*failCount)/TRIALS + "% failures of the " +
                           "multiplicative inverse law.");
    }
}
```

Output:

```
15% failures of the multiplicative inverse law.
```

If it's any consolation, floating-point addition and multiplication do obey the commutative laws, which state that, for any real values a and b,

a + b = b + a

and

a × b = b × a

Whew!
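Commutativity can be spot-checked the same way as the associative laws. In this sketch (mine, not the book's), swapping the operands never changes the rounded result, because both orders of an IEEE 754 addition or multiplication round the same exact value:

```java
import java.util.Random;

public class CommutativeCheck
{
    public static void main(String[] args)
    {
        final int TRIALS = 1000000;  // one million
        Random random = new Random();

        int failCount = 0;
        for (int i = 0; i < TRIALS; ++i) {
            float a = random.nextFloat();
            float b = random.nextFloat();

            // Both orders round the same exact sum and product.
            if (a + b != b + a || a * b != b * a) ++failCount;
        }

        System.out.println(failCount + " failures of the commutative laws.");
    }
}
```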

