# 1.3 Real Numbers versus Floating-Point Numbers

*Java Number Cruncher: The Java Programmer's Guide to Numerical Computing*, by Ronald Mak. Chapter 1: Floating-Point Numbers Are Not Real!


Roundoff errors do not occur in pure mathematics. They are a by-product of computation, whether by hand or by computer. Since we're working with computers, let's see what some of the differences are between pure math and computer math in order to understand why roundoff errors happen.

In pure math, the real numbers are infinite. There is neither a smallest number nor a largest number. Between any two numbers, no matter how close together their values are, there exists yet another number. In other words, the real numbers are continuous: there are no gaps. Our intuition also tells us that the numbers are all "evenly distributed" along the real number line. Each number is also infinitely precise: the digits after the decimal point go on and on, whether it's a single repeating digit (including 0), a repeating group of digits, or (in the case of an irrational number) just more and more digits without any apparent pattern.

Now let's consider the numbers of computer math, specifically Java's floating-point numbers. Chapter 3 examines these numbers in detail, but we already know enough to point out some of their major differences from the real numbers.

First and foremost, the floating-point numbers are not infinite. There is a smallest one, and there is a largest one. There is only a fixed number of floating-point numbers, and so there are gaps between them. Single-precision floating-point numbers (type float) have about seven significant decimal digits, and double-precision floating-point numbers (type double) have about 15 significant decimal digits. We have to use the word about because, internally, the numbers are represented in base 2, not base 10, and the conversion between the internal binary form and the external decimal text form introduces some fuzziness.
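We can see this fuzziness directly by printing the values Java actually stores for the decimal literal 0.1. A minimal sketch (the class name is just for illustration):

```java
public class PrecisionDemo {
    public static void main(String[] args) {
        // 0.1 has no finite binary representation, so float and double
        // each store the nearest value their precision allows.
        float f = 0.1f;
        double d = 0.1;
        System.out.printf("float  0.1 stored as %.20f%n", f);
        System.out.printf("double 0.1 stored as %.20f%n", d);
    }
}
```

Neither printed value is exactly 0.1; the double is simply much closer to it than the float is.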

How are the floating-point numbers distributed along the number line? Again, Chapter 3 goes into this in much greater detail, but it suffices to say for now that a floating-point number is stored internally in two parts, a significand[1] and an exponent, similar to scientific notation. Each part can be positive or negative. So, the number 0.012345 is stored as 1.2345 and -2, which together represent 1.2345 × 10⁻². (We'll stick to base 10 for these examples.) Unless the number is 0, the significand always has a single, nonzero digit to the left of the decimal point.

[1] Called the mantissa in older textbooks.
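Java's real floating-point types use base 2 internally rather than the base 10 of these examples, but the same significand-and-exponent decomposition can be observed with the standard library's Math.getExponent and Math.scalb. A sketch (the class name is just for illustration):

```java
public class SignificandDemo {
    public static void main(String[] args) {
        double d = 0.012345;
        // The unbiased base-2 exponent of d.
        int exponent = Math.getExponent(d);
        // Dividing out 2^exponent leaves a significand in [1, 2),
        // analogous to the single nonzero digit before the decimal point.
        double significand = Math.scalb(d, -exponent);
        System.out.println(d + " = " + significand + " x 2^" + exponent);
    }
}
```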

To answer the distribution question, let's simplify matters by assuming that the significand is limited to a single decimal digit and that the exponent is restricted to only the values -1, 0, and +1. Table 1-1 lists all the positive values we can represent.

In Table 1-1, note that most of the 0 row is empty. The special case is when the significand and the exponent are both 0: the value is 0 itself. A similar table with negative significand values along the left would list all of the negative values we can represent.

##### Table 1-1. All of the positive values (and 0) that we can represent when we limit the significand to a single decimal digit and we restrict the exponent to only the values -1, 0, and +1. The significand values are on the left at the head of each row, and the exponent values are on top at the head of each column. The representable values are in bold within the table.

| Significand | Exponent -1 | Exponent 0 | Exponent +1 |
|-------------|----------------|---------------|----------------|
| 0 |  | 0×10⁰ = **0** |  |
| 1 | 1×10⁻¹ = **0.1** | 1×10⁰ = **1** | 1×10¹ = **10** |
| 2 | 2×10⁻¹ = **0.2** | 2×10⁰ = **2** | 2×10¹ = **20** |
| 3 | 3×10⁻¹ = **0.3** | 3×10⁰ = **3** | 3×10¹ = **30** |
| 4 | 4×10⁻¹ = **0.4** | 4×10⁰ = **4** | 4×10¹ = **40** |
| 5 | 5×10⁻¹ = **0.5** | 5×10⁰ = **5** | 5×10¹ = **50** |
| 6 | 6×10⁻¹ = **0.6** | 6×10⁰ = **6** | 6×10¹ = **60** |
| 7 | 7×10⁻¹ = **0.7** | 7×10⁰ = **7** | 7×10¹ = **70** |
| 8 | 8×10⁻¹ = **0.8** | 8×10⁰ = **8** | 8×10¹ = **80** |
| 9 | 9×10⁻¹ = **0.9** | 9×10⁰ = **9** | 9×10¹ = **90** |
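The 27 positive values of Table 1-1 are easy to generate programmatically. A minimal sketch (the class name is just for illustration):

```java
public class ToyFloats {
    public static void main(String[] args) {
        // A one-digit significand (1..9) combined with an exponent of
        // -1, 0, or +1 yields every positive value in Table 1-1.
        for (int significand = 1; significand <= 9; significand++) {
            for (int exponent = -1; exponent <= 1; exponent++) {
                System.out.printf("%d x 10^%2d = %s%n",
                        significand, exponent,
                        significand * Math.pow(10, exponent));
            }
        }
    }
}
```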

Figure 1-1 plots the represented positive values on the number line. In (a), we see the numbers from 0 through 1; in (b), we see the numbers from 0 through 10; in (c), we see the numbers from 0 through 90. Thus, it is apparent that the numbers are not evenly distributed: the gaps between numbers widen by a factor of 10 with each increase in the exponent value. There are infinitely many numbers we cannot represent, such as 0.25 or 48.

##### Figure 1-1. The representable floating-point numbers plotted on the number line. (a) From 0 through 1, (b) from 0 through 10, and (c) from 0 through 90.

Of course, we would be able to represent more numbers if we allowed more digits in the significand, and their range would be wider if we allowed more exponent values. But the key facts remain: there will be only a finite number of representable numbers, and they will not be evenly distributed.
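The same uneven spacing holds for Java's actual doubles. The standard library's Math.ulp returns the size of the gap between a double and the next larger representable value, so a quick sketch (the class name is just for illustration) can show the gaps widening with magnitude:

```java
public class GapDemo {
    public static void main(String[] args) {
        // Math.ulp(d) is the distance from d to the next representable
        // double, i.e., the local gap. It grows as the magnitude grows.
        for (double d : new double[] {1.0, 1000.0, 1.0e15}) {
            System.out.println("gap after " + d + " is " + Math.ulp(d));
        }
    }
}
```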

So let's return to our original question of where roundoff errors come from. Whenever a computed value lies between two representable values (and, more likely than not, it will not land exactly on a representable value), the computed value is replaced by the nearest representable value. The resulting roundoff error is the difference between the computed value and the representable value.

DEFINITION: Roundoff error is the difference between an exact value and its representable value.
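For a single stored constant, this difference can be measured exactly with java.math.BigDecimal: the BigDecimal(double) constructor expands a double's binary value to its full decimal form, which we can then subtract from the exact value. A sketch (the class name is just for illustration):

```java
import java.math.BigDecimal;

public class RoundoffDemo {
    public static void main(String[] args) {
        BigDecimal exact  = new BigDecimal("0.1");  // the exact value
        BigDecimal stored = new BigDecimal(0.1);    // the double actually stored
        // The difference is the roundoff error of representing 0.1 as a double.
        System.out.println("stored:   " + stored);
        System.out.println("roundoff: " + stored.subtract(exact));
    }
}
```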
