Before going into details on internetworking, it's necessary to cover the basic concepts that explain how computer technology works. We'll do this "from the wire up" to help you understand why systems work the way they do.
The Internet's infrastructure is composed of millions of networking devices (routers, switches, firewalls, access servers, and hubs) loosely hooked together through a sophisticated global addressing scheme. They're linked mostly by twisted-pair copper cable to the desktop and by big trunk lines running over very high-speed fiber-optic cable. But at bottom, the Internet is simply those millions of individual hardware devices tied together by one global addressing scheme.
Networking devices are more or less the same as normal computer platforms, such as your PC. The biggest differences are in configuration: most types of network equipment have no monitors or disks, because they're designed to move data, not store or present it. However, all network devices are computers in the basic sense that they have CPUs, memory, and operating systems.
Computing is largely a matter of sending electrical signals between various hardware components. In a standard computer platform, the signals shoot around tiny transistors inside the CPU or memory and travel over ultra-thin wires embedded in printed circuit boards. Once on the outside, electrical signals travel over cables in order to move between devices.
As signals are passed over the cable, network interface cards (NICs) at each end keep track of the electrical pulse waveforms and interpret them as data. The NIC senses each electrical pulse as either an On or an Off signal. This is called binary transmission: a system in which each On pulse is recorded as the number 1 and each Off pulse as the number 0. In machine language, these zeros and ones are bits, and a file of bits is a binary file.
Whether a signal represents a zero or a one is sensed by fluctuations in the voltage of electrical pulses (or light pulses, over fiber-optic media) during minuscule time intervals. The rate at which these intervals occur is measured in cycles per second, or hertz (Hz), in electrical engineering circles. For example, the CPU in a 100-Mbps NIC can generate 100 million cycles per second. In practical terms, the payoff is that the computer can process 100 million pulses per second and interpret them as either zeros or ones.
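The arithmetic behind that 100-Mbps figure is easy to check. This sketch follows the text's simplification of one On/Off pulse read as one bit per cycle; a real 100BASE-TX link uses a line encoding, so actual signaling rates differ.

```python
# Simplified arithmetic from the text: one pulse, read as one bit, per cycle.
clock_hz = 100_000_000                 # 100 million cycles per second (100 MHz)
bits_per_second = clock_hz * 1         # one bit interpreted per cycle
bit_time_ns = 1e9 / bits_per_second    # how long each pulse lasts, in nanoseconds

print(bits_per_second)  # 100000000
print(bit_time_ns)      # 10.0
```

In other words, at this rate the NIC has only 10 nanoseconds to decide whether each pulse is a one or a zero.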
All computers use binary transmission at the machine level. Bits are the basic raw material with which they work, usually as a collection of bits in a binary file. Binary files are the stuff that gets put into memory, processed through CPUs, stored on disks, and sent over cables. Both data and software programs are stored as binary files. If you were to look at a data file in any type of computer in binary format, you'd be staring at a page full of 0s and 1s. Doing so might make your eyes glaze over, but computers can handle binary format because of ordinality, a fancy term for knowing what piece of information is supposed to appear in a certain field position.
The computer doesn't keep track of ordinal positions one by one. Instead, it keeps track of the bit positions at which each field begins and ends. A field is a logical piece of information. For example, the computer might know that bit positions 121 through 128 are used to store a person's middle initial.
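That middle-initial example can be acted out in a few lines. The 128-bit record layout here is invented purely for illustration; real systems define such layouts in their file or protocol specifications.

```python
# Hypothetical record layout from the text: bit positions 121-128
# (1-indexed) hold a person's middle initial as one 8-bit ASCII character.
def extract_field(bits: str, start: int, end: int) -> str:
    """Pull out the field occupying 1-indexed bit positions start..end."""
    return bits[start - 1:end]

# Build a 128-bit record: 120 padding bits, then the initial 'Q' (binary 01010001).
record = "0" * 120 + format(ord("Q"), "08b")

field = extract_field(record, 121, 128)
print(field)               # 01010001
print(chr(int(field, 2)))  # Q
```

Knowing only "bits 121 through 128" is enough: the computer never inspects the other 120 positions to find the field.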
Computers are able to track bit orders with great precision by using clocks that time exactly where a CPU is in a stream of bits. By knowing where fields are, computers can build data from the wire up.
For simplicity, however, computers don't operate one bit at a time. There's an interim level one step up from bits called bytes (thus the expression "bits and bytes"). A byte is a series of eight consecutive bits that are operated on as a unit.
From a logical standpoint, the basic unit making up a data field is the byte. This not only makes systems run faster, but also makes them easier to program and debug. You'll never see a programmer declare how many bits long a field should be, but declaring byte lengths is routine. Keeping track of individual bit positions is often left to the computer.
Unlike software, CPU hardware must deal in bits: at the lowest level, it works with the On/Off electrical signals pulsing through its circuitry. Shuttling those bits around one at a time would be far too slow, so computers also have what's called a word size. The step up from a byte is the word: the number of bytes a CPU architecture is designed to handle in each cycle.
For example, an Intel Xeon-based PC or server is a 32-bit word machine, meaning that it processes 32 bits per clock cycle. But as hardware miniaturization techniques have advanced, and the need to process data faster has grown, the industry has settled on 64-bit word architectures as the way to go. A variety of 64-bit machines are available from IBM, Sun, and manufacturers using Intel's Itanium 2 architecture. Cisco devices use both 32- and 64-bit CPUs.
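The byte-to-word relationship can be seen with Python's `struct` module: eight individual bytes and one 64-bit integer are two views of the same word-sized chunk of memory.

```python
# Eight bytes viewed as a single 64-bit word (little-endian byte order,
# as on Intel-style CPUs).
import struct

raw = bytes([1, 0, 0, 0, 0, 0, 0, 0])   # 8 bytes: one 64-bit word's worth
(word,) = struct.unpack("<Q", raw)       # read them as one unsigned 64-bit integer

print(word)                  # 1
print(struct.calcsize("<Q")) # 8  (bytes per 64-bit word)
```

A 64-bit CPU moves all eight of those bytes in one operation, where a 32-bit design would need two.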
The last step up is from bytes to something we humans can understand. As you probably know, software takes the form of source code files written by computer programmers.
The commands that programmers type into source code files are symbols instructing the computer what to do. When a program is written, it's changed into machine language (bits and bytes) by a compiler, a specialized application that translates software code into machine-language files referred to as executables or binaries. For example, code written in the C++ programming language is translated by a C++ compiler.
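You can watch this translation step in miniature with Python's own built-in compiler. This is only an analogy: Python produces bytecode for its virtual machine rather than native machine code, but the idea (human-readable source in, lower-level instructions out) is the same.

```python
# Translate a line of source code into lower-level instructions, then run it.
import dis

source = "answer = 6 * 7"
code = compile(source, "<example>", "exec")  # source text -> compiled code object
dis.dis(code)                                # list the translated instructions

namespace = {}
exec(code, namespace)                        # execute the compiled code
print(namespace["answer"])                   # 42
```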
You may have noticed the .exe and .bin file extensions in your PC's directory. They stand for executable and binary, respectively. The Cisco IOS (Internetwork Operating System) is executable software. Actually, it's a package containing hundreds of executables that operate device hardware, forward packets, talk to neighboring network devices, and so on.
A computing architecture is a technical specification of all components that make up a system. Published computing architectures are quite detailed and specific, and most are thousands of pages long. But in certain parts, they are abstract by design, leaving the exact implementation of that portion of the architecture up to the designer.
What separates an architecture from a regular product specification is the use of abstract layering. An abstraction layer is a fixed interface connecting two system components, and it governs the relationship between each side's function and implementation. If something changes on one side of the interface, by design, it should not require changes on the other side. These layers are put in to help guarantee compatibility in two directions:
Between various components within the system
Between various products implementing the architecture
Abstraction between components stabilizes system designs, because it allows different development groups to engineer against a stable target. For example, the published interface between the layers in networking software allows hundreds of network interface manufacturers to engineer products compatible with the Fast Ethernet specification.
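That "stable target" idea can be sketched in code. Everything here (the `NicDriver` interface, the vendor class) is invented for illustration, not taken from any real specification: the point is that the upper layer is written against the fixed interface, so either side can change without touching the other.

```python
# A fixed interface (abstraction layer) between two system components.
from abc import ABC, abstractmethod

class NicDriver(ABC):
    """The fixed interface: the contract both sides engineer against."""
    @abstractmethod
    def send(self, frame: bytes) -> int:
        """Transmit a frame; return the number of bytes accepted."""

class VendorADriver(NicDriver):
    """One vendor's implementation; any other conforming driver works too."""
    def send(self, frame: bytes) -> int:
        return len(frame)  # pretend the hardware accepted every byte

def transmit(driver: NicDriver, payload: bytes) -> int:
    # The upper layer never changes, no matter which driver is plugged in.
    return driver.send(payload)

print(transmit(VendorADriver(), b"hello"))  # 5
```

Swapping in a driver from a different manufacturer requires no change to `transmit`, which is exactly the compatibility guarantee the published interface provides.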
There are several important computing architectures, some more open than others. The Microsoft Windows/Intel 80x86 ("Wintel") architecture is the stuff of legend. Other important computing architectures include RAID (redundant array of inexpensive disks), Java, CORBA (Common Object Request Broker Architecture, a vendor-independent architecture and infrastructure that applications use to work together over networks), and dozens more. Yet perhaps the most important computing architecture ever devised is the one that created the Internet: the OSI reference model.