Fundamental Numeric Types

The basic numeric types in C# have keywords associated with them. These types include integer types, floating-point types, and a decimal type to store large numbers with a high degree of accuracy.

Integer Types

There are eight C# integer types. This variety allows you to select a data type large enough to hold its intended range of values without wasting resources. Table 2.1 lists each integer type.

Table 2.1. Integer Types [1]

Type     Size     Range (Inclusive)                                         BCL Name
sbyte    8 bits   -128 to 127                                               System.SByte
byte     8 bits   0 to 255                                                  System.Byte
short    16 bits  -32,768 to 32,767                                         System.Int16
ushort   16 bits  0 to 65,535                                               System.UInt16
int      32 bits  -2,147,483,648 to 2,147,483,647                           System.Int32
uint     32 bits  0 to 4,294,967,295                                        System.UInt32
long     64 bits  -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807   System.Int64
ulong    64 bits  0 to 18,446,744,073,709,551,615                           System.UInt64

[1] There was significant discussion among language designers and CLI designers about which types should be in the CLS. Ultimately, the decision was made to support only one type, signed or unsigned, per length. The C# designers insisted that although signed types for all lengths were acceptable in general, the byte type was an exception because unsigned bytes were more useful and common. In fact, it was argued, signed bytes could potentially cause programming problems. In the end, the C# team's perspective won out and the unsigned byte was included in the CLS instead of the signed byte.

Included in Table 2.1 (and in Tables 2.2 and 2.3) is a column for the full name of each type. All the fundamental types in C# have a short name and a full name. The full name corresponds to the type as it is named in the Base Class Library (BCL). This name is the same across all languages and it uniquely identifies the type within an assembly. Because of the fundamental nature of primitive types, C# also supplies keywords as short names or abbreviations to the full names of fundamental types. From the compiler's perspective, both names are exactly the same, producing exactly the same code. In fact, an examination of the resulting CIL code would provide no indication of which name was used.
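This equivalence is easy to verify. The following sketch (my illustration, not one of the book's listings) shows that the keyword int and the BCL name System.Int32 refer to exactly the same type:

```csharp
// int is simply the C# keyword for the BCL type System.Int32;
// both names compile to exactly the same CIL type.
int shortName = 2147483647;
System.Int32 fullName = 2147483647;

System.Console.WriteLine(shortName == fullName);               // True
System.Console.WriteLine(typeof(int) == typeof(System.Int32)); // True
```

Because the compiler treats the two names identically, the choice between them is purely a matter of coding-style convention.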

Language Contrast: C/C++ short Data Type

In C/C++, the short data type is an abbreviation for short int. In C#, short on its own is the actual data type.

Floating-Point Types (float, double)

Floating-point numbers have varying degrees of precision. If you were to read the value of a floating-point number as 0.1, it could very easily be 0.099999999999999999 or 0.1000000000000000001 or some other number very close to 0.1. Alternatively, a large number such as Avogadro's number, 6.02E23, could be off by 9.9E9 and still be exceptionally close to 6.02E23, considering its size. By definition, the accuracy of a floating-point number is in proportion to the magnitude of the number it represents. Accuracy, therefore, is determined by the number of significant digits, not by a fixed value such as ±0.01.

C# supports the two floating-point number types listed in Table 2.2.

Binary numbers appear as base 10 (denary) numbers for human readability. The number of bits (binary digits) converts to 15 decimal digits, with a remainder that contributes to a sixteenth decimal digit, as expressed in Table 2.2. Specifically, numbers between 1.7 x 10^307 and less than 1 x 10^308 have only 15 significant digits, whereas numbers ranging from 1 x 10^308 to 1.7 x 10^308 have 16 significant digits. A similar range of significant digits occurs with the decimal type as well.

Table 2.2. Floating-Point Types

Type     Size     Range (Inclusive)                   BCL Name       Significant Digits
float    32 bits  ±1.5 x 10^-45 to ±3.4 x 10^38       System.Single  7
double   64 bits  ±5.0 x 10^-324 to ±1.7 x 10^308     System.Double  15-16
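The significant-digit column of Table 2.2 can be observed directly. In this sketch (my illustration, not one of the book's listings), a 16-digit literal survives when stored in a double but is rounded when stored in a float:

```csharp
// A double carries roughly 15-16 significant decimal digits; a float
// carries only about 7, so the same literal loses precision as a float.
double d = 1.618033988749895;
float f = 1.618033988749895F;

System.Console.WriteLine(d.ToString("R")); // round-trip: all digits kept
System.Console.WriteLine(f);               // rounded to float precision
System.Console.WriteLine(d == (double)f);  // False: precision was lost
```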

Decimal Type

C# contains a numeric type with 128-bit precision (see Table 2.3). This is suitable for large and precise calculations, frequently financial calculations.

Table 2.3. decimal Type

Type     Size      Range (Inclusive)                           BCL Name        Significant Digits
decimal  128 bits  1.0 x 10^-28 to approximately 7.9 x 10^28   System.Decimal  28-29

Unlike floating-point numbers, the decimal type maintains exact precision for all denary numbers within its range. With the decimal type, therefore, a value of 0.1 is exactly 0.1. However, while the decimal type has greater precision than the floating-point types, it has a smaller range. Thus, conversions from floating-point types to the decimal type may result in overflow errors. Also, calculations with decimal are slightly slower.
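The exactness claim can be checked with a short sketch (my illustration, not one of the book's listings): summing 0.1 ten times is exact with decimal but accumulates binary rounding error with double.

```csharp
// decimal stores 0.1 exactly; double stores only the nearest binary
// fraction, so repeated addition drifts away from 1.0.
double doubleSum = 0;
decimal decimalSum = 0;
for (int i = 0; i < 10; i++)
{
    doubleSum += 0.1;
    decimalSum += 0.1M;
}

System.Console.WriteLine(doubleSum == 1.0);   // False: rounding error
System.Console.WriteLine(decimalSum == 1.0M); // True: exact denary value
```

This is the reason decimal, rather than a floating-point type, is the conventional choice for monetary amounts.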

Advanced Topic: Floating-Point Types and Decimals Dissected

Unless they are out of range, decimal numbers represent denary numbers exactly. In contrast, the floating-point representation of denary numbers can introduce a rounding error. The key difference between the decimal type and the C# floating-point types is that the base of a decimal type's exponent is denary (10), while the base of the floating-point types' exponent is binary (2).

A decimal is any number ±N * 10^k where

  • N is a positive integer represented by 96 bits.

  • k is given by -28 <= k <= 0.

In contrast, a float is any number ±N * 2^k where

  • N is a positive integer represented by a fixed number of bits (24 for float and 53 for double).

  • k is any integer ranging from -149 to +104 for float and -1075 to +970 for double.
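These fields can be inspected directly. Assuming the .NET System.BitConverter class, the following sketch (my illustration, not from the book) pulls apart a double to reveal the stored significand and exponent; the -1075 offset converts the stored biased exponent field to the integer-significand form of k used above:

```csharp
// Decompose 0.1 into its IEEE 754 double fields: sign (1 bit),
// biased exponent (11 bits), and stored significand (52 bits).
long bits = System.BitConverter.DoubleToInt64Bits(0.1);
long significand = bits & 0xFFFFFFFFFFFFFL;  // low 52 stored bits of N
int biased = (int)((bits >> 52) & 0x7FF);    // biased exponent field

System.Console.WriteLine("stored significand: 0x{0:X}", significand);
System.Console.WriteLine("k (integer-significand form): {0}", biased - 1075);
```

Running this shows that 0.1 is actually stored as a 53-bit integer scaled by a negative power of 2, which is why it cannot be represented exactly.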

Literal Values

A literal value is a representation of a constant value within source code. For example, if you want to have System.Console.WriteLine() print out the integer value 42 and the double value 1.618034 (Phi), you could use the code shown in Listing 2.1.

Listing 2.1. Specifying Literal Values

System.Console.WriteLine(42);
System.Console.WriteLine(1.618034);

Output 2.1 shows the results of Listing 2.1.

Output 2.1.

42
1.618034

Beginner Topic: Use Caution When Hardcoding Values

The practice of placing a value directly into source code is called hardcoding, because changing the values means recompiling the code. Developers must carefully consider the choice between hardcoding values within their code and retrieving them from an external source, such as a configuration file, so that the values are modifiable without recompiling.

By default, when you specify a literal number with a decimal point, the compiler interprets it as a double type. Conversely, an integer value (with no decimal point) generally defaults to an int, assuming the value is not too large to be stored in an integer. If the value is too large, then the compiler will interpret it as a long. Furthermore, the C# compiler allows assignment to a numeric type other than an int, assuming the literal value is appropriate for the target data type. For example, short s = 42 and byte b = 77 are allowed. However, this is appropriate only for literal values; b = s is not appropriate without additional syntax, as discussed in the section Conversions between Data Types, later in this chapter.
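These rules can be seen in a short sketch (my illustration, not one of the book's listings): the int literals initialize the smaller types directly, while assigning a short to a byte requires an explicit cast.

```csharp
short s = 42;  // allowed: the literal 42 fits in a short
byte b = 77;   // allowed: the literal 77 fits in a byte
int i = s;     // allowed: implicit widening conversion

// b = s;      // compile error: no implicit short-to-byte conversion
b = (byte)s;   // allowed with an explicit cast

System.Console.WriteLine(b); // 42
```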

As previously discussed in the section Fundamental Numeric Types, there are many different numeric types in C#. In Listing 2.2, a literal value is placed within C# code. Since numbers with a decimal point will default to the double data type, the output, shown in Output 2.2, is 1.61803398874989 (the last digit, 5, is missing), corresponding to the expected accuracy of a double.

Listing 2.2. Specifying a Literal double

System.Console.WriteLine(1.618033988749895);

Output 2.2.

1.61803398874989
To view the intended number with its full accuracy, you must declare explicitly the literal value as a decimal type by appending an m (or M) (see Listing 2.3 and Output 2.3).

Listing 2.3. Specifying a Literal decimal

System.Console.WriteLine(1.618033988749895M);

Output 2.3.

1.618033988749895
Now the output of Listing 2.3 is as expected: 1.618033988749895. Note that the suffix for a decimal is m rather than d, because d denotes a double; the m corresponds to the decimal type's frequent use in monetary calculations.

You can also add a suffix to explicitly declare a literal as float or double by using the f and d suffixes, respectively. For integer data types, the suffixes are u, l, lu, and ul. The type of an integer literal can be determined as follows.

  • Numeric literals with no suffix resolve to the first data type that can store the value in this order: int, uint, long, and ulong.

  • Numeric literals with the suffix u resolve to the first data type that can store the value in the order uint and then ulong.

  • Numeric literals with the suffix l resolve to the first data type that can store the value in the order long and then ulong.

  • If the numeric literal has the suffix ul or lu, it is of type ulong.

Note that suffixes for literals are case insensitive. However, for long, uppercase is generally preferred because of the similarity between the lowercase letter l and the digit 1.
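As an illustrative sketch (mine, not one of the book's listings), each suffix below fixes the literal's own type; calling GetType() on a literal confirms which type it resolved to.

```csharp
// Literal suffixes determine the literal's type directly.
float f = 1.618034F;    // F: float
double d = 1.618034D;   // D: double (also the default with a decimal point)
decimal m = 1.618034M;  // M: decimal
long l = 42L;           // L: long (uppercase preferred over lowercase l)
ulong ul = 42UL;        // UL or LU: ulong

System.Console.WriteLine(42L.GetType());  // System.Int64
System.Console.WriteLine(42UL.GetType()); // System.UInt64
```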

In some situations, you may wish to use exponential notation instead of writing out several zeroes before or after the decimal point. To use exponential notation, supply the e or E infix, follow the infix character with a positive or negative integer, and complete the literal with the appropriate data type suffix. For example, you could print out Avogadro's number as a float, as shown in Listing 2.4 and Output 2.4.

Listing 2.4. Exponential Notation

System.Console.WriteLine(6.023E23F);

Output 2.4.

6.023E+23
Beginner Topic: Hexadecimal Notation

Usually you work with numbers that are represented with a base of 10, meaning there are 10 symbols (0-9) for each digit in the number. If a number is displayed with hexadecimal notation, then it is displayed with a base of 16, meaning 16 symbols are used: 0-9 and A-F (lowercase can also be used). Therefore, 0x000A corresponds to the decimal value 10 and 0x002A corresponds to the decimal value 42. The actual number is the same. Switching from hexadecimal to decimal, or vice versa, does not change the number itself, just its representation.

Each hex digit is four bits, so a byte can represent two hex digits.

In all discussions of literal numeric values so far, I have covered only decimal type values. C# also supports the ability to specify hexadecimal values. To specify a hexadecimal value, prefix the value with 0x and then use any hexadecimal digit, as shown in Listing 2.5.

Listing 2.5. Hexadecimal Literal Value

// Display the value 42 using a hexadecimal literal.
System.Console.WriteLine(0x002A);

Output 2.5 shows the results of Listing 2.5.

Output 2.5.

42
Note that this code still displays 42, not 0x002A.

Advanced Topic: Formatting Numbers as Hexadecimal

To display a numeric value in its hexadecimal format, it is necessary to use the x or X numeric formatting specifier. The casing determines whether the hexadecimal letters appear in lower- or uppercase. Listing 2.6 shows an example of how to do this.

Listing 2.6. Example of a Hexadecimal Format Specifier

// Displays "0x2A"
System.Console.WriteLine("0x{0:X}", 42);

Output 2.6 shows the results.

Output 2.6.

0x2A
Note that the numeric literal (42) can be in decimal or hexadecimal form. The result will be the same.

Advanced Topic: Round-Trip Formatting

By default, System.Console.WriteLine(1.618033988749895); displays 1.61803398874989, with the last digit missing. To identify the string representation of the double value more accurately, you can convert it using a format string and the round-trip format specifier, R (or r). For example, string.Format("{0:R}", 1.618033988749895) returns 1.6180339887498949.

The round-trip format specifier returns a string that, if converted back into a numeric value, will always result in the original value. Listing 2.7 demonstrates.

Listing 2.7. Formatting Using the R Format Specifier

// ...
const double number = 1.618033988749895;
double result;
string text;

text = string.Format("{0}", number);
result = double.Parse(text);
System.Console.WriteLine("{0}: result != number",
    result != number);

text = string.Format("{0:R}", number);
result = double.Parse(text);
System.Console.WriteLine("{0}: result == number",
    result == number);
// ...

Output 2.7 shows the resulting output.

Output 2.7.

True: result != number
True: result == number

When assigning text the first time, there is no round-trip format specifier and, as a result, the value returned by double.Parse(text) is not the same as the original number value. In contrast, when the round-trip format specifier is used, double.Parse(text) returns the original value.

Essential C# 2.0. ISBN: 0321150775. Year: 2007. Pages: 185.