1.1 Evolution of computer systems architectures





Computers came into being with the development of the ENIAC computer system in the mid-1940s. The ENIAC and its early successors were constructed of vacuum tubes and filled a large room. These early computer systems were dedicated to a single task and had no operating system. Their power was less than that of the handheld calculators in use today, and they were used mainly for ballistic trajectory computations and military research. The architecture of these early computers was based on the von Neumann stored-program, single-instruction-stream architecture (Figure 1.1). This basic architecture and philosophy are still in use today in most computer systems.

Figure 1.1: Basic computer system.

These early computer systems had no sophisticated operating systems, databases, networks, or high-level programming languages to simplify their operation. They stored program instructions and the data needed for computation in the same place. Instructions were read from memory one at a time, and most were concerned with loading and storing program data between memory and the registers where the data were operated on. Data in these early systems were not shared by programs. If a program needed data produced by another program, the data items were typically copied into a region near the end of the consuming program's address space, at end addresses hard-coded into that application.

A user application resides on a computer system. The computer system provides the physical medium on which the data and programs are stored and the processing capacity to manipulate the stored data. A processing unit of a computer system consists of five main elements: the memory, an arithmetic logic unit, an input unit, an output unit, and a control element. The memory unit stores both the data for programs and the instructions of a program that manipulates stored data.

The program's individual elements or instructions are fetched from the memory one at a time and are interpreted by the control unit. The control unit, depending on the interpretation of the instruction, determines what computer operation to perform next. If the instruction requires no additional data, the control indicates to the arithmetic logic unit what operation to perform and with what registers. (See Figure 1.1.)

If the instruction requires additional data, the control unit passes the appropriate command to the memory (via the MAR, or memory address register) to fetch a data item from memory (through the MDR, or memory data register) and to place it in an appropriate register in the ALU's data register bank (Figure 1.2). This continues until all required operands are in the appropriate registers of the ALU. Once all operands are in place, the control unit commands the ALU to perform the appropriate instruction, for example, multiplication, addition, or subtraction. If the instruction requires input or output, the control element transmits a word from the input unit to the memory or ALU, depending on the instruction. If an output instruction is decoded, the control unit commands the transmission of the appropriate memory word or register to the indicated output channel. These five elements comprise the fundamental building blocks used in the original von Neumann computer system and are found, in some form, in most contemporary systems.


Figure 1.2: Low-level memory access.

A computer system comprises the five building blocks previously described, as well as additional peripheral support devices, which aid in data movement and processing. These basic building blocks are used to form the general processing, control, storage, and input and output units that make up modern computer systems. Devices are typically organized in a manner that supports the application processing for which the computer system is intended. For example, if massive amounts of data need to be stored, then additional peripheral storage devices, such as disks or tape units, are required, along with their controllers or data channels.

To better describe the variations among architectures, we will briefly discuss some details. The arithmetic logic unit (ALU) and the control unit are merged into a central processing unit, or CPU, which controls the flow of instructions and data in the computer system. Memories can be broken down into hierarchies based on nearness to the CPU and speed of access. Cache memory is small, extremely fast memory used for the instructions and data the CPU is actively executing and using. The primary memory is slower, but it is also cheaper and contains more memory locations; it is used to store the data and instructions of applications presently running on the CPU. For example, if you boot up the word processing program on your personal computer, the operating system will attempt to place the entire program in primary memory. If there is insufficient space, the operating system will partition the program into segments and pull them in as needed.

The portion of the program that cannot be stored in memory is maintained on a secondary storage device, typically a disk drive. This device has a much greater storage capacity than the primary memory, typically costs much less per unit of storage, and has data access times that are much slower than the primary memory. An additional secondary storage device is the tape drive unit. A tape drive is a simple storage device that can store massive amounts of data-again, at less cost than the disk units but at a reduced access speed. Other components of a computer system are input and output units. These are used to extract data from the computer and provide these data to external devices or to input data from the external device. The external devices could be end-user terminals, sensors, information network ports, video, voice, or other computers.

A computer system's architecture is constructed using basic building blocks, such as CPUs, memories, disks, I/O, and other devices as needed.

In the following sections we will examine each of the components of a computer system in more detail, as we examine how these devices can be interconnected to support data processing applications.

1.1.1 CPU architectures

The central processing unit (CPU) is the core of a computer system and consists of the arithmetic logic unit (ALU) and the control unit. The ALU can come in a variety of configurations, from a single simple unit up to extremely complex units that perform elaborate operations. The primary job of the ALU is to take zero or more operands and perform the function called for in the instruction. In addition to the ALU, the CPU contains a set of registers to store operands and intermediate results of computations and to maintain information the CPU uses to determine the state of its computations. For example, there are registers for the status of the ALU's operation, for keeping track of the next instruction to be performed, for buffering data flowing in from or out to memory, for holding the instruction being executed, and for the locations of operands being operated on by the CPU. Each of these registers has a unique function within the CPU, and each is necessary for various classes of computer architectures. A typical minimal architecture for a CPU and its registers is shown in Figure 1.3 and consists of a primary memory connected to the CPU via buses. There are registers in the CPU for holding instructions, instruction operands, and the results of operations; a program location counter, containing the location in memory of either an instruction or an operand, depending on the decoding of the current instruction; and a program counter, containing the location of the next instruction to perform.

Figure 1.3: Typical CPU architecture.

The CPU also contains the control unit. The control unit uses the status registers and instructions in the instruction register to determine what functions the CPU must perform on the registers, ALU, and data paths that make up the CPU. The basic operation of the CPU follows a simple loop, called the instruction execution cycle (Figure 1.4). There are six basic functions performed in the instruction loop: instruction fetch, instruction decode, operand effective address calculation, operand fetch, operation execution, and next address calculation. This execution sequence represents the basic functions found in all computer systems. Variations in the number of steps are found based on the type and length of the instruction.

Figure 1.4: Instruction cycle execution.

1.1.2 Instruction architectures

There are numerous ideas about how to organize computer systems around the instruction set. One form, which has come of age with today's powerful workstations, is the reduced instruction set computer (RISC), in which each instruction is simple but highly optimized. At the other end of the spectrum is the very long instruction word (VLIW) architecture, in which a single instruction may represent an enormous amount of processing. A middle ground is the complex instruction set computer (CISC).

Memory-addressing schemes

There are also numerous ways in which to determine the address of an operand from an instruction. Each address computation method has its benefits in terms of instruction design flexibility. There are six major types of addressing computation schemes found in computers: immediate, direct, index, base, indirect, and two-operand. We will examine these further in Chapter 2.

1.1.3 Memory architectures

Generally, a computer system's memory is organized as a regular structure, addressed using the contents of a memory address register and with data transferred through a memory data register (Figure 1.5). Memory architectures are based on the organization of the memory words. The simplest form is a linear two-dimensional structure. A second organization is the two-and-a-half-dimensional architecture.


Figure 1.5: CPU memory access.

1.1.4 I/O architectures

Input and output architectures are used by computer systems to move information into or out of the computer's main memory and have evolved into many forms. I/O architectures typically rely on the use of one element of the computer as the router of I/O transfers. This router can be the CPU, the memory, or a specialized controller. Chapter 2 discusses these architectures in greater detail.

1.1.5 Secondary storage and peripheral device architectures

I/O devices connect to and control secondary storage devices. Primary memory capacity has grown considerably over the years, but not to the point where additional data and program storage is unnecessary. The storage hierarchy (Figure 1.6) consists of a variety of data storage types, from the highest-speed element, cache, to the slowest-speed elements, such as tape drives. The tradeoff the systems architect must make is between the cost and the speed of the storage medium per unit of storage. Typical secondary storage devices include magnetic tape drives, magnetic disk drives, compact optical disk drives, and archival storage devices such as disk jukeboxes.

Figure 1.6: Memory hierarchy.

Magnetic tape provides a low-cost, high-density storage medium for infrequently accessed or slow-access data. An improvement over tape storage is the random-access disk unit, which can have either removable or internal fixed storage media. Archival storage devices are typically composed of removable media configured into an array of devices.

1.1.6 Network architectures

Networks evolved from the needs of applications and organizations to share information and processing capacity in real time. Computer networks provide yet another input and output path for the computer to receive or send information. Networks are architected in many ways: they can have a central switching element, share a central storage repository, or be connected by intelligent interface units over a communications medium such as telephone wires or digital cables. The configuration used depends on the degree of synchronization and control required, as well as on the physical distance between computers. Chapter 2 will examine some architectures and topology configurations for networked computer systems.

1.1.7 Computer architectures

Computer architectures represent the means of interconnectivity for a computer's hardware components as well as the mode of data transfer and processing exhibited. Different computer architecture configurations have been developed to speed up the movement of data, allowing for increased data processing. The basic architecture has the CPU at the core with a main memory and input/output system on either side of the CPU (see Figure 1.7). A second computer configuration is the central input/output controller (see Figure 1.8). A third computer architecture uses the main memory as the location in the computer system from which all data and instructions flow in and out. A fourth computer architecture uses a common data and control bus to interconnect all devices making up a computer system (see Figure 1.9). An improvement on the single shared central bus architecture is the dual bus architecture. This architecture either separates data and control over the two buses or shares them to increase overall performance (see Figure 1.10).

Figure 1.7: Basic computer architecture.

Figure 1.8: Alternative computer architecture.

Figure 1.9: Common bus architecture.

Figure 1.10: Dual bus architecture.

We will see how these architectures and elements of the computer system are used as we continue with our discussion of system architectures and operations.






Computer Systems Performance Evaluation and Prediction
ISBN: 1555582605
Year: 2002
Pages: 136