4.4 A Model-based Methodology


This section describes the basic steps used to design and analyze computer systems with performance in mind. The methodology builds on workload and performance models and can be used throughout the phases of the system lifecycle. Performance engineering provides a series of steps to be followed in a systematic way [2]. Figure 4.4 gives an overview of the main steps of the quantitative approach to analyze the performance of a system.

Figure 4.4. A model-based performance engineering methodology.


The starting point of the methodology is to specify the system's performance objectives. These objectives should be quantified as part of the system requirements and are used to establish service level goals: service levels are defined, business metrics are established, and the performance goals of the system are documented (a hypothetical specification is sketched after this paragraph). Once the system and its quantitative objectives have been determined, one can proceed through the quantitative analysis cycle. The steps of the cycle are:
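For concreteness, service level goals might be recorded in machine-readable form. The sketch below is a minimal, hypothetical example in Python; the transaction names, metrics, and targets are illustrative assumptions, not values from the text.

```python
# Hypothetical service level goals for an online system.
# All names and targets are illustrative assumptions.
service_level_goals = {
    "search_transaction":  {"max_avg_response_time_sec": 2.0},
    "payment_transaction": {"max_avg_response_time_sec": 5.0,
                            "min_throughput_tps": 50.0},
    "site":                {"min_availability": 0.999},
}
```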

  • Understand the system. The first step is to obtain an in-depth understanding of the system architecture and conduct an architecture-level review with emphasis on performance. This means answering questions such as: What are the system requirements of the business model? What type of software (e.g., operating system, transaction monitor, DBMS, application software) is going to be used in the system? This step yields a systematic description of the system architecture, its components, and goals. It is an opportunity to review the performance issues of the proposed architecture.

  • Characterize the workload. In this step, the basic components that compose the workload are identified. The choice of components depends both on the nature of the system and on the purpose of the characterization. The product of this step is a statement such as "The workload under study consists of e-business transactions, e-mail messages, and data-mining requests." The performance of a system with many clients, servers, and networks depends heavily on the characteristics of its load. Thus, it is vital in any performance engineering effort to clearly understand and accurately characterize the workload [8, 13]. The workload of a system can be defined as the set of all inputs that the system receives from its environment during a given period of time. For instance, if the system under study is a database server, then its workload consists of all transactions (e.g., query, update) processed by the server during an observation interval.

  • Measure the system and obtain workload parameters. The third step involves measuring the performance of the system and obtaining values for the parameters of the workload model. Measurement is key to all tasks in performance engineering: it allows one to understand the parameters of the system and to establish a link between a system and its model. Performance measurements are collected from different reference points, carefully chosen to observe and monitor the environment under study. For example, suppose a database server is observed for 10 minutes, during which 100,000 transactions complete. The workload of the database server during that 10-minute period is the set of 100,000 transactions, and its characteristics are captured by recording, for each transaction, information such as arrival and completion times, CPU time, and number of I/O operations (the first sketch after this list shows how such measurements translate into workload parameters).

  • Develop performance models. In the fourth step, quantitative techniques and analytical (or simulation or prototype) models are used to develop performance models of systems. Performance models are used to understand the behavior of complex systems and to predict performance when any aspect of the workload or the system architecture is changed. Simple models based on the operational analysis discussed in Chapter 3 are accessible to software engineering practitioners and offer insight into how software architectural decisions impact performance [11] (see the second sketch after this list).

  • Verify and validate the models. The fifth step aims at verifying the model specifications and validating the model's results. This step applies to both performance and workload models. A performance model is said to be validated if the performance metrics (e.g., response time, resource utilizations, throughputs) calculated by the model match the measurements of the actual system within an acceptable margin of error. As a rule of thumb, resource utilizations within 10%, system throughput within 10%, and response time within 20% are considered acceptable [16] (the third sketch after this list applies these tolerances). A model is said to be verified if it is built correctly, that is, if its results accurately reflect the model's specification [25]. Details of performance model calibration techniques are available [16]. In summary, this step answers questions such as: Is the right model being used for the system under consideration? Does the model capture the behavior of the critical components of the system?

  • Forecast workload evolution. Most systems undergo modifications and evolve throughout their lifetime, and as a system's demands change, so does its workload. Demands grow or shrink depending on many factors, such as the functionality offered to users, the number of users, hardware upgrades, or changes to software components. The sixth step forecasts the expected workload for the system. Techniques and strategies for forecasting workload changes [12] should provide answers to questions such as: What will be the average size of e-mails by the end of next year? What will be the number of simultaneous users of the online banking system six months from now? (A simple trend-fitting example appears in the fourth sketch after this list.)

  • Predict system performance. Performance guidance is needed at each stage of the system lifecycle, since every architectural decision can potentially create barriers to achieving the system performance goals. Thus, performance prediction is key to performance engineering work, because one needs to be able to determine how a system will react when load levels or user behavior change or when new software components are integrated into the system. This determination requires predictive models; experimentation is not usually viable because fixing performance defects may require structural changes that are expensive. In the seventh step, performance models are used to predict the performance of a system under many different scenarios.

  • Analyze performance scenarios. Validated performance and workload models are used to predict the performance of a system under several different scenarios, such as upgraded servers, faster networks, changes in the system workload, changes in user behavior, and changes to the software system. To help find the most cost-effective system architecture, different scenarios are analyzed in this step. Each scenario consists of a future system feature and/or a workload forecast. Because every forecast item carries a certain degree of uncertainty, several possible future scenarios and different candidate system architectures are considered. A selection of alternatives is generated so that system engineers may choose the most appropriate option in terms of cost/benefit (the last sketch after this list evaluates a few such scenarios).
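To make the measurement step concrete, the sketch below derives basic workload parameters from the 10-minute database server observation described in the list, using the Utilization Law and the Service Demand Law from Chapter 3. The CPU and disk busy times are assumed values, not measurements from the text.

```python
# Sketch: deriving workload parameters from measurement data.
# The observation window and transaction count follow the example in
# the text; the resource busy times are assumed values.

observation_period_sec = 600        # 10-minute measurement window (T)
completed_transactions = 100_000    # transactions completed (C)
cpu_busy_sec = 420.0                # assumed measured CPU busy time (B_cpu)
disk_busy_sec = 300.0               # assumed measured disk busy time (B_disk)

# Throughput: X0 = C / T.
throughput_tps = completed_transactions / observation_period_sec  # ~166.7 tps

# Utilization Law: U_i = B_i / T.
cpu_utilization = cpu_busy_sec / observation_period_sec    # 0.70
disk_utilization = disk_busy_sec / observation_period_sec  # 0.50

# Service Demand Law: D_i = U_i / X0 (seconds of resource time per txn).
cpu_demand_sec = cpu_utilization / throughput_tps    # ~4.2 ms
disk_demand_sec = disk_utilization / throughput_tps  # ~3.0 ms

print(f"X0 = {throughput_tps:.1f} tps, "
      f"D_cpu = {cpu_demand_sec * 1000:.2f} ms, "
      f"D_disk = {disk_demand_sec * 1000:.2f} ms")
```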
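Building on those parameters, a very simple open queueing model can serve as the performance model: each resource i with service demand D_i has utilization U_i = lambda * D_i at arrival rate lambda, and residence time R_i = D_i / (1 - U_i). This is a minimal sketch under the usual open-model assumptions of operational analysis, not the full treatment of Chapters 10 through 15.

```python
# Sketch: a minimal open queueing network performance model.
# At arrival rate lam (tps), each resource i with service demand D_i
# has utilization U_i = lam * D_i and residence time R_i = D_i / (1 - U_i).

def predict_response_time(arrival_rate_tps, demands_sec):
    """Return the predicted response time (sec), or None if saturated."""
    total = 0.0
    for demand in demands_sec:
        utilization = arrival_rate_tps * demand
        if utilization >= 1.0:
            return None  # a resource is saturated; the model does not apply
        total += demand / (1.0 - utilization)
    return total

# Demands (sec per transaction) from the measurement sketch above.
demands = [0.0042, 0.0030]  # CPU, disk
print(f"R = {predict_response_time(166.7, demands) * 1000:.1f} ms")
```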
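The rule-of-thumb validation tolerances cited in the list (10% for utilizations and throughput, 20% for response time [16]) can be checked mechanically. In the sketch below, the measured and modeled metric values are hypothetical.

```python
# Sketch: validating model outputs against measurements using the
# rule-of-thumb tolerances from the text. Metric values are assumed.

TOLERANCES = {"utilization": 0.10, "throughput": 0.10, "response_time": 0.20}

def validate(measured, modeled):
    """Return metric -> (relative_error, within_tolerance)."""
    report = {}
    for metric, tolerance in TOLERANCES.items():
        error = abs(modeled[metric] - measured[metric]) / measured[metric]
        report[metric] = (error, error <= tolerance)
    return report

measured = {"utilization": 0.70, "throughput": 166.7, "response_time": 0.022}
modeled  = {"utilization": 0.70, "throughput": 166.7, "response_time": 0.020}
for metric, (err, ok) in validate(measured, modeled).items():
    print(f"{metric}: {err:.1%} error -> {'accept' if ok else 'calibrate'}")
```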
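Workload forecasting can be as simple as fitting a trend to historical intensity data. The sketch below fits a straight line by least squares to hypothetical monthly arrival rates and extrapolates; real studies would also consider seasonality and the more elaborate techniques referenced in the list [12].

```python
# Sketch: least-squares linear trend forecast of workload intensity.
# The monthly arrival rates below are hypothetical observations.

def linear_forecast(history, periods_ahead):
    """Fit y = a + b*t to history and extrapolate periods_ahead steps."""
    n = len(history)
    t_mean = (n - 1) / 2
    y_mean = sum(history) / n
    slope = (sum((t - t_mean) * (y - y_mean) for t, y in enumerate(history))
             / sum((t - t_mean) ** 2 for t in range(n)))
    intercept = y_mean - slope * t_mean
    return intercept + slope * (n - 1 + periods_ahead)

monthly_tps = [120, 131, 138, 151, 158, 167]  # hypothetical history
print(f"Forecast, 6 months ahead: {linear_forecast(monthly_tps, 6):.0f} tps")
```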
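Finally, a validated model can be exercised over candidate scenarios. The sketch below reuses predict_response_time from the model sketch above to compare the current configuration, the forecast load, and a hypothetical CPU upgrade; the scenario names and scaling factors are assumptions.

```python
# Sketch: scenario analysis with the model defined earlier.
# Each scenario scales the CPU demand and/or sets the offered load;
# the scenarios and factors below are hypothetical.

base_demands = {"cpu": 0.0042, "disk": 0.0030}  # sec per transaction

scenarios = {
    "current system, current load":  (1.0, 166.7),
    "current system, forecast load": (1.0, 224.0),
    "2x faster CPU, forecast load":  (0.5, 224.0),  # CPU demand halved
}

for name, (cpu_factor, load_tps) in scenarios.items():
    demands = [base_demands["cpu"] * cpu_factor, base_demands["disk"]]
    r = predict_response_time(load_tps, demands)
    outcome = f"{r * 1000:.1f} ms" if r is not None else "saturated"
    print(f"{name}: {outcome}")
```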

Two models are central components of the methodology: the workload model and the system performance model. Workload models are studied in detail in this chapter. Performance models were introduced in Chapter 3. Techniques for constructing models of different types of systems are developed in Chapters 10 through 15.


