Lesson 1: Computer Communication


In this lesson, we examine the fundamentals of electronic communication and explore how computer communication differs from human communication.

After this lesson, you will be able to:

  • Understand how a computer transmits and receives information.
  • Explain the principles of computer language.
Estimated lesson time: 20 minutes

Early Forms of Communication

Humans communicate primarily through words, spoken and written. From ancient times until about 150 years ago, messages were delivered either verbally or in writing. Getting a message to a distant recipient was often slow, and sometimes the message (or the messenger) got lost along the way.

As time and technology progressed, people developed devices to communicate faster over greater distances. Items such as lanterns, mirrors, and flags were used to send messages quickly over an extended visual range.

All "out of earshot" communications have one thing in common: they require some type of "code" to convert human language to a form of information that can be packaged and sent to the remote location. It might be a set of letters in an alphabet, a series of analog pulses over a telephone line, or a sequence of binary numbers in a computer. On the receiving end, this code needs to be converted back to language that people can understand.

Dots and Dashes, Bits and Bytes

Telegraphs and early radio communication used codes for transmissions. The most common, Morse code (named after its creator, Samuel F. B. Morse), is based on assigning a series of pulses to represent each letter of the alphabet. These pulses are sent over a wire one after another, and the operator on the receiving end converts the code back into letters and words. Morse code remained in official use for messages at sea almost to the end of the twentieth century; it was officially retired in 1999.

Morse used a code in which any single transmitted value had two possible states: either a dot or a dash. By combining dots and dashes into groups, an operator could represent letters and, by stringing those together, words. The same kind of on-off notation can also be used to represent two numbers, 0 and 1: zero represents no signal (off), and one represents a signal (on).

This type of number language is called binary notation because it uses only two digits, usually 0 and 1. It was first used by the ancient Chinese, who used the terms yin (empty) and yang (full) to build complex philosophical models of how the universe works.

Our computers are, at heart, complex boxes of switches, each with two states, and they use a binary scheme as well. A given switch's state, on or off, represents a value that can be used as a code. Modern computer technology uses terms other than yin and yang, but the same binary mathematics creates the virtual worlds inside our modern machines.

The Binary Language of Computers

The binary math terms that follow are fundamental to understanding PC technology.

Bits

A bit is the smallest unit of information that is recognized by a computer: a single on/off event.

Bytes

A byte is a group of eight bits. One byte is required to represent one character of information. Pressing one key on a keyboard is equivalent to sending one byte of information to the CPU (the computer's central processing unit). The byte is the standard unit by which memory is measured in a computer; values are expressed in terms of kilobytes (KB) or megabytes (MB). The table that follows lists units of computer memory and their values, and a short sketch after the table shows how those values arise from powers of two.

Memory unit       Value
Bit               Smallest unit of information; shorthand for "binary digit"
Nibble            4 bits (half of a byte)
Byte              8 bits (equal to one character)
Word              16 bits on most personal computers (longer words are possible on larger computers)
Kilobyte (KB)     1,024 bytes
Megabyte (MB)     1,048,576 bytes (approximately one million bytes, or 1,024 KB)
Gigabyte (GB)     1,073,741,824 bytes (approximately one billion bytes, or 1,024 MB)
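
Those fixed sizes are simply powers of two. The following Python sketch is only a study aid, not part of the A+ material; it prints the byte counts for a kilobyte, megabyte, and gigabyte.

# Illustrative sketch: derive the memory-unit values from powers of two.
units = {
    "Kilobyte (KB)": 2 ** 10,   # 1,024 bytes
    "Megabyte (MB)": 2 ** 20,   # 1,048,576 bytes
    "Gigabyte (GB)": 2 ** 30,   # 1,073,741,824 bytes
}

for name, size_in_bytes in units.items():
    print(f"{name}: {size_in_bytes:,} bytes")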

The Binary System

The binary system of numbers uses a base of two and only two digits, 0 and 1. As described earlier, a bit can exist in only two states, on or off. When bits are represented visually:

  • 0 (zero) equals off.
  • 1 (one) equals on.

The following is one byte of information in which all eight bits are set to zero. Read as a binary number, this sequence of eight zeros represents the value zero.

 0     0     0     0     0     0     0     0 

The binary system is one of several numerical systems that can be used for counting. It is similar to the decimal system, which we use to calculate everyday numbers and values. The prefix "dec" in the term "decimal system" comes from the Latin word for ten and denotes a base of ten. That is, the decimal system is based on the ten numbers zero through nine. The binary system has a base of two, the numbers zero and one.

Counting in Binary Notation

Every schoolchild learns to count using the decimal system. There, the rightmost whole-number position (the digit just to the left of the decimal point) is the "ones" column. A digit written there has a value of zero to nine. The column to the left of the ones column (if present) is the "tens" column, worth ten to ninety. Ten is the factor by which each additional column grows in the decimal system of notation. To get the total value of a number, we add all of the columns together: 111 is the sum of 100 + 10 + 1.

NOTE
A factor is an item that is multiplied in a multiplication problem. For example, 2 and 3 are factors in the problem 2 × 3.

In our more common decimal notation, the values of numbers are founded on a base of ten, starting with the rightmost column. Any digit in that position has a value from zero to nine. A digit in the next column to the left is worth ten to ninety, and a digit in the column to the left of that is worth one hundred to nine hundred. Binary notation uses the same system of right-to-left columns of ascending value, but each column allows only two possible digits instead of ten.

Under the binary system, the rightmost column can hold only a zero or a one. The next column to the left is worth two, the one after that four, then eight, then sixteen, and so on, with each column doubling the value of the one to its right. Two is the factor used in the binary system, and, just as in decimal, zero counts in that tally. To find a number's value, add together the values of the columns that contain a one. Examples of bytes of information (eight bits each) follow.

Byte—Example A

The value of this byte is zero because all bits are off (0 = off).

 Bits:     0     0     0     0     0     0     0     0
 Values: 128    64    32    16     8     4     2     1

Byte—Example B

In this example, two of the bits are turned on (1 = on). The total value of this byte is determined by adding the values associated with the bit positions that are on. This byte represents the number 5 (4 + 1).

 Bits:     0     0     0     0     0     1     0     1
 Values: 128    64    32    16     8     4     2     1

Byte—Example C

In this example, two different bits are turned on to represent the number 9 (8 + 1).

 Bits:     0     0     0     0     1     0     0     1
 Values: 128    64    32    16     8     4     2     1

Those who are mathematically inclined will quickly realize that a single byte can hold 256 different values, the largest of which is 255 (all eight bits turned on).
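
If you want to check examples like these yourself, the short Python sketch below (a study aid, not part of the original lesson) adds up the place values for whichever bits are set in an eight-bit string.

# Illustrative sketch: compute the decimal value of an 8-bit byte.
def byte_value(bits):
    """bits: a string of eight '0'/'1' characters, most significant bit first."""
    place_values = [128, 64, 32, 16, 8, 4, 2, 1]
    return sum(value for bit, value in zip(bits, place_values) if bit == "1")

print(byte_value("00000000"))  # 0   (Example A)
print(byte_value("00000101"))  # 5   (Example B: 4 + 1)
print(byte_value("00001001"))  # 9   (Example C: 8 + 1)
print(byte_value("11111111"))  # 255 (the largest value one byte can hold)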

Because computers use binary numbers and humans use decimal numbers, A+ technicians must be able to perform simple conversions. The following table shows the decimal numbers 0 through 9 and their binary equivalents. You will need to know this information. The best way to prepare is to learn how to count and add in binary, rather than merely memorizing the values; a short practice sketch follows the table.

Decimal Number Binary Equivalent
0 0000
1 0001
2 0010
3 0011
4 0100
5 0101
6 0110
7 0111
8 1000
9 1001
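
To check your practice conversions, Python's built-in format() and int() functions translate in both directions. This assumes a Python interpreter is handy; it is only a study aid, not part of the A+ material.

# Illustrative sketch: convert between decimal and 4-bit binary.
for n in range(10):
    print(n, format(n, "04b"))   # e.g. 5 -> 0101

# And back again: int() with base 2 turns a binary string into a decimal number.
print(int("1001", 2))            # 9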

Numbers are fine for calculating, but today's computers must handle text, sound, streaming video, images, and animation as well. To handle all of that, standard codes are needed to translate between binary machine language and the type of data being represented and presented to the human user. The first common code-based language was developed to handle text characters.

Parallel and Serial Devices

The telegraph and the individual wires in our PCs are serial devices. This means that only one element of code can be sent at a time. Like a tunnel with room for only one person to pass through at a time, a single wire carries only one signal at a time. All electronic communications are, at some level, serial, because a single wire can have only two states: on or off.

To speed things up, we can add more wires, which allows simultaneous transmission of signals. To continue our analogy, it is like adding more tunnels next to the first one; we still have only one person per tunnel, but we can get more people through at once because they are traveling in parallel. That is the difference between parallel and serial data transmission. In PC technology, we often string eight wires in a parallel set, allowing eight bits to be sent at once. This means that a single "send" can represent any of 256 values (2^8 = 256), the same number of values found in the extended ASCII code system (discussed in the next section). Figure 2.1 illustrates serial and parallel communication, and a short sketch after the figure models the same idea.

Figure 2.1 Serial and parallel communication
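
The difference is easier to see if you picture the same byte leaving the computer one bit at a time versus all eight bits at once. The Python sketch below is a loose, hypothetical model of the two ideas; real serial and parallel ports involve hardware signaling well beyond this.

# Illustrative model: "sending" the byte 01000001 (the letter A) serially and in parallel.
byte = "01000001"

# Serial: one bit per clock tick, eight ticks in total.
for tick, bit in enumerate(byte, start=1):
    print(f"tick {tick}: the single wire carries {bit}")

# Parallel: eight wires, one tick, one bit per wire.
print("one tick: the eight wires carry", list(byte))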

ASCII Code

The standard code for handling text characters on most modern computers is called ASCII (American Standard Code for Information Interchange). The basic ASCII standard consists of 128 codes representing the English alphabet, punctuation, and certain control characters. Most systems today recognize 256 codes: the original 128, plus an additional 128 codes called the extended character set.

Remember that a byte represents one character of information; four bytes are needed to represent a string of four characters. The following four bytes represent the text string 12AB (using ASCII code):

 00110001     00110010     01000001     01000010
     1            2            A            B

The following illustrates how the binary language spells the word "binary":

     B            I            N            A            R            Y
 01000010     01001001     01001110     01000001     01010010     01011001
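
Both examples can be reproduced with Python's built-in ord() function, which returns a character's ASCII code, and format(), which shows that code as an eight-bit binary string. This sketch is a study aid, not part of the original lesson.

# Illustrative sketch: show the 8-bit ASCII code for each character of a string.
def to_ascii_bits(text):
    return " ".join(format(ord(ch), "08b") for ch in text)

print(to_ascii_bits("12AB"))    # 00110001 00110010 01000001 01000010
print(to_ascii_bits("BINARY"))  # 01000010 01001001 01001110 01000001 01010010 01011001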

NOTE
It is very important to understand that in computer processing the "space" is a significant character. Every item in a coded message must be spelled out for the machine to process it. Like any other character, the space has a binary value that must be included in the data stream. The presence or absence of a space is critical and sometimes causes confusion or frustration among new users. Uppercase and lowercase letters also have different values. Some operating systems (for example, UNIX) distinguish between them in commands, while others (for example, MS-DOS) treat a command the same no matter how it is cased.

The following table lists the ASCII codes for the decimal digits and the uppercase and lowercase letters. Even in present-day computing, laden with multimedia and sophisticated programming, ASCII retains an honored and important position.

Symbol   Binary (1 byte)   Decimal      Symbol   Binary (1 byte)   Decimal
0        00110000          48           V        01010110          86
1        00110001          49           W        01010111          87
2        00110010          50           X        01011000          88
3        00110011          51           Y        01011001          89
4        00110100          52           Z        01011010          90
5        00110101          53           a        01100001          97
6        00110110          54           b        01100010          98
7        00110111          55           c        01100011          99
8        00111000          56           d        01100100          100
9        00111001          57           e        01100101          101
A        01000001          65           f        01100110          102
B        01000010          66           g        01100111          103
C        01000011          67           h        01101000          104
D        01000100          68           i        01101001          105
E        01000101          69           j        01101010          106
F        01000110          70           k        01101011          107
G        01000111          71           l        01101100          108
H        01001000          72           m        01101101          109
I        01001001          73           n        01101110          110
J        01001010          74           o        01101111          111
K        01001011          75           p        01110000          112
L        01001100          76           q        01110001          113
M        01001101          77           r        01110010          114
N        01001110          78           s        01110011          115
O        01001111          79           t        01110100          116
P        01010000          80           u        01110101          117
Q        01010001          81           v        01110110          118
R        01010010          82           w        01110111          119
S        01010011          83           x        01111000          120
T        01010100          84           y        01111001          121
U        01010101          85           z        01111010          122

NOTE
All letters have a separate ASCII value for uppercase and lowercase. The capital letter "A" is 65, and the lowercase "a" is 97.
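
A quick way to confirm this on any machine with Python installed (an assumption on my part; the lesson does not require it):

print(ord("A"), ord("a"))   # 65 97 -- uppercase and lowercase have different codes
print(ord(" "))             # 32 -- even the space character has a binary value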

Keep in mind that computers are machines, and they do not really perceive numbers as anything other than electrical charges setting a switch on or off. Like a binary digit, each switch can exist in only one of two states. The computer interprets the presence of a charge as one and the absence of a charge as zero. This simple mechanism is what allows a computer to process information.

Lesson Summary

The following points summarize the main elements of this lesson:

  • Computers communicate using binary language.
  • A bit is the smallest unit of information that is recognized by a computer.
  • ASCII is the standard code that handles text characters for computers.

