Chapter 3: Fundamental Concepts and Performance Measures




3.1 Introduction

Computer systems architects and designers look for configurations of computer system elements such that system performance meets desired measures; that is, the computer system delivers a quality of service that meets the demands of the user applications. The measure of this quality of service and the expectation of performance, however, vary depending on who is asking. In the broadest context we may mean user response time, ease of use, reliability, fault tolerance, and other such performance quantities. The problem with some of these is that they are qualitative rather than quantitative measures. To be scientific and precise in our computer systems performance studies, we must focus on measurable, quantitative properties of the system under study.

There are many possible choices for measuring performance, but most fall into one of two categories: system-oriented or user-oriented measures. System-oriented measures typically revolve around the concepts of throughput and utilization. Throughput is defined as the average number of items (e.g., transactions, processes, customers, or jobs) processed per unit of measured time. Throughput is meaningful only when we also know the capacity of the measured entity and the workload of items presented to it over the measured time period. We can use throughput measures to estimate system capacity by observing the presented workload at which the number of waiting items is never zero and, conversely, the level of workload below which items never wait. Utilization is the fraction of time that a particular resource is busy. One example is CPU utilization, which distinguishes the time the CPU is idle from the time it is executing the presented programs.
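As an illustration of these two definitions, the following Python sketch computes throughput and utilization from raw counters collected over an observation interval. The counter names and sample figures are assumptions made for this example only; they are not drawn from any particular system.

def throughput(completed_items, interval_seconds):
    # Average number of items processed per unit of measured time.
    return completed_items / interval_seconds

def utilization(busy_seconds, interval_seconds):
    # Fraction of the observation interval during which the resource was busy.
    return busy_seconds / interval_seconds

# Hypothetical figures: 1,200 transactions completed in a 60-second interval,
# with the CPU busy for 45 of those 60 seconds.
print(throughput(1200, 60.0))    # 20.0 transactions per second
print(utilization(45.0, 60.0))   # 0.75, i.e., the CPU is 75 percent busy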

The user-oriented performance measures typically include response time or turnaround time. Both refer to the system's elapsed time from the point at which a user or application initiates a job to the point at which the job's answer or response is returned to the user. From this simple definition it can readily be seen that these are not unambiguous measures, since many variables are involved. For example, I/O channel traffic may cause variations in the measure for the same job, as may operating system load or CPU load. Therefore, if this measure is to be used, the performance modeler must be unambiguous in defining what it means. These user measures are considered random quantities and are therefore typically discussed in terms of expected or average values as well as variances about those values.
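Because response time is treated as a random quantity, its measurements are usually summarized by an average and a variance. The following sketch shows one way such a summary might be computed; the response-time samples are invented for the example.

from statistics import mean, variance

# Hypothetical response-time samples (in seconds) for repeated runs of the same job.
samples = [0.82, 0.95, 1.10, 0.88, 1.35, 0.91, 1.02]

print("mean response time: %.3f s" % mean(samples))
print("variance:           %.3f s^2" % variance(samples))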

In all cases, however, to make such measurements we need some basic understanding of the environment we are working in and its parameters. One fundamental concept is that of time. To measure a physical phenomenon we need a metric to measure it against, and in computer systems this metric is typically time. Time alone, however, is not sufficient; we also need a reference point from which to mark time. This reference point is sometimes defined by an event in the system being measured or simply by a specified instant. For example, in a computer system we may wish to measure the time a transaction takes to execute within a database system. We need to define the events of interest for this transaction system: for example, the beginning of the transaction, the running of the transaction, and the ending or commitment of the transaction. Given that we have time and events, we next need to define when and how we measure these events and the intervals of interest for them.
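As a minimal sketch of marking time at defined events, the fragment below timestamps the beginning and commitment of a hypothetical transaction (run_transaction is a stand-in, not a real database call) and reports the elapsed interval between the two events.

import time

def run_transaction():
    # Stand-in for the real work of the transaction.
    time.sleep(0.05)

begin = time.perf_counter()    # event: the transaction begins
run_transaction()              # the transaction runs
commit = time.perf_counter()   # event: the transaction commits

print("transaction elapsed time: %.4f s" % (commit - begin))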

Other basic concepts needed for our discussion of computer systems performance include the means by which one measures or samples a system. Measurements can take many forms within an evaluation project, as will be seen. Another aspect of time that is important in computer systems performance studies is that of intervals. An interval represents a measured span of time; for example, a day, a week, or a month each represents an interval. Most important to computer systems evaluation is the concept of response. Response represents a completion event for a measured entity: for example, the time between when a key is pressed on a computer terminal and when the user receives the result.
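To illustrate intervals and response together, the sketch below groups hypothetical response measurements into fixed one-minute intervals and summarizes each interval; the timestamps and response times are invented for the example.

from collections import defaultdict

# Hypothetical (timestamp in seconds, response time in seconds) measurements.
measurements = [(3.2, 0.9), (41.7, 1.1), (75.0, 0.8), (118.4, 1.4), (122.9, 1.0)]

buckets = defaultdict(list)
for timestamp, response in measurements:
    buckets[int(timestamp // 60)].append(response)   # 60-second intervals

for interval, responses in sorted(buckets.items()):
    avg = sum(responses) / len(responses)
    print("interval %d: %d responses, mean %.2f s" % (interval, len(responses), avg))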

To make use of the basic quantities of time, events, intervals, and response, we need some additional concepts concerning the relationships among these items. The typical concerns deal with independence and randomness as they relate to the items within a computer system. Last, but not least, the concept of a workload and the role it plays in a modeling project must be defined.


