Chapter 11: System Performance Evaluation Tool Selection and Use

Once we have decided to assess the performance of some target computer system, we must still decide which of the techniques we have discussed is the most appropriate for the proposed performance study. Many different considerations must be taken into account before we make such a decision.

11.1 Tool selection

The four techniques for computer systems performance evaluation are analytical modeling, Petri net modeling, simulation modeling, and empirical or testbed analysis. Depending on the criteria placed on the analysis, some rough selection guidelines can be derived. The most important criterion is the stage of the computer system's life cycle. For example, measurement is only an option if the system, or something similar to it, already exists. If the computer system is new and has not yet been built, then analytical modeling, Petri nets, or simulation modeling makes more sense. In the earliest phases of the life cycle, when we are examining tradeoffs among many components, we may wish to use analytical modeling, since it can provide relatively quick answers to tradeoff questions and lets us determine early on which subset of the n alternatives is worth more detailed modeling. Once we have completed this rough analysis and narrowed our choices to a smaller subset, we would probably apply Petri nets to refine them further. Petri nets add the ability to model and trade off concurrency, conflict, and synchronization, something that cannot be captured with analytical modeling. Once we have completed the Petri net analysis and narrowed our choices to only a few components, we could turn to simulation. Simulation can produce very detailed models of a target system, or of just one specific contentious component. The goal at each of these early stages of a computer system's design and development is to narrow the number of choices so that we can select the best architecture and components for the system's application requirements. Finally, once the system is constructed, we would apply empirical modeling. This allows us to verify that our early modeling was correct and to identify areas where the new system could be further refined and improved before delivery to a customer.
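
To make the notion of "relatively quick answers" concrete, the following is a minimal sketch, in Python, of an early-stage analytical screening using standard M/M/1 queuing formulas. The workload and the two candidate service rates are hypothetical placeholders, not values from any particular system; the point is only that a few closed-form expressions can rank alternatives in seconds.

# Minimal analytical screening sketch: standard M/M/1 formulas applied to
# two hypothetical candidate components under the same offered workload.

def mm1_metrics(arrival_rate, service_rate):
    """Return utilization, mean number in system, and mean response time
    for an M/M/1 queue (Poisson arrivals, exponential service, one server)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    utilization = arrival_rate / service_rate
    mean_in_system = utilization / (1.0 - utilization)
    mean_response = 1.0 / (service_rate - arrival_rate)
    return utilization, mean_in_system, mean_response

arrival_rate = 80.0                                # requests per second (hypothetical)
for name, service_rate in [("candidate A", 100.0), ("candidate B", 120.0)]:
    u, n, r = mm1_metrics(arrival_rate, service_rate)
    print(f"{name}: utilization={u:.2f}  mean in system={n:.2f}  response={1000*r:.1f} ms")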

The next criterion to consider when deciding which modeling tool to use is the time available for the modeling task. In most situations a model is requested because some problem has occurred and an answer was needed yesterday. There is a saying that time is money, and computer systems modeling is no different. If we had unlimited time for our evaluations, we would probably walk through each model in turn, refining our analysis as described under the life-cycle criterion above. The problem is that we typically do not have such a luxury. If time is short, we can usually only use analytical or Petri net modeling, with analytical modeling winning out if time is very short. If time is important but not critical, then Petri nets and simulation are the next models of choice. Petri nets take less time to develop than simulations but may also provide less detailed analysis information. If the system exists, then measurement may be preferable to simulation modeling when the number of alternatives under consideration is small. If the number of alternatives is significant, then simulation wins out, even though it typically takes more time than measurement.
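
These rules of thumb can be summarized as a simple decision aid. The sketch below, in Python, merely restates the guidance of this paragraph in code; the category labels and the threshold on the number of alternatives are illustrative assumptions, not fixed values from the text.

# Illustrative decision aid restating the time-based selection rules above.
# The "small" threshold of three alternatives is an assumed example value.

def suggest_technique(time_available, system_exists, num_alternatives):
    """time_available: 'very short', 'short', or 'adequate'."""
    if time_available == "very short":
        return "analytical modeling"
    if time_available == "short":
        return "analytical or Petri net modeling"
    if system_exists and num_alternatives <= 3:
        return "measurement of the existing system"
    if num_alternatives > 3:
        return "simulation modeling"
    return "Petri net or simulation modeling"

print(suggest_technique("adequate", system_exists=False, num_alternatives=5))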

The third selection criterion is tool availability. Availability covers several different aspects. The first that comes to mind is the availability of a computer-based tool. For example, if we had a tool that allowed us simply to define queuing models and to vary modeling factors and system component characteristics, then analytical modeling would be much easier to apply. If no such tool exists to support the kind of model we are proposing, then availability instead means that the modelers have the knowledge and ability to construct an analytical model by hand and perform the tradeoff analysis with it. Likewise, if we are looking to use Petri nets, we would first check whether computer-based tools exist; second, whether we have modelers who know those tools; and, third, if no tools exist, whether our modeling staff has the knowledge to construct a Petri net model of the target computer system. If we are looking toward constructing a simulation model, we would first see whether an off-the-shelf simulation tool provides the class of model we require. For example, one can readily purchase a number of simulation tools aimed at network analysis, possibly some for architectures, and probably none for operating systems. If a specific tool exists, we must determine whether it meets the needs of the modeling task and, if not, whether it can be tailored to do so. If existing tools do not suffice, we must select a general-purpose simulation language, or a general-purpose programming language, and construct our simulation model from scratch. This is a time-consuming and laborious task, requiring performance modelers with the requisite simulation design and programming skills.
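
To give a feel for what "from scratch" entails, here is a minimal sketch, in Python, of a hand-built simulation of a single-server queue using Lindley's recursion for customer waiting times. The parameter values are hypothetical. Even this toy model needs random-variate generation, a main loop, and statistics collection; a production simulator would add an event calendar, warm-up handling, and confidence intervals, which is where most of the development effort goes.

# Minimal from-scratch simulation sketch: waiting times in a single-server
# queue via Lindley's recursion, with hypothetical parameter values.

import random

def simulate_single_server_wait(arrival_rate, service_rate, customers, seed=1):
    """Estimate mean time spent waiting in queue (exponential arrivals/service)."""
    rng = random.Random(seed)
    wait = 0.0          # waiting time of the current customer
    total_wait = 0.0
    for _ in range(customers):
        service = rng.expovariate(service_rate)        # this customer's service time
        interarrival = rng.expovariate(arrival_rate)   # gap until the next arrival
        total_wait += wait
        # Lindley's recursion: the next customer waits for whatever work
        # remains when it arrives, but never a negative amount.
        wait = max(0.0, wait + service - interarrival)
    return total_wait / customers

print(f"estimated mean wait: {simulate_single_server_wait(80.0, 100.0, 200_000):.4f} s")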

The fourth criterion is the selected modeling tool's ability to deliver accurate information about the system under analysis. By accuracy we mean how closely the information the model delivers maps to the target system. Analytical models require the modeler to make many tradeoffs and assumptions to simplify the development of the model and keep it tractable; such simplifications also make the results suspect. Petri nets suffer from similar problems, though they are not as severe as in the analytical case. Simulations allow the modeler to incorporate more detail from the target computer system and may require fewer assumptions, thereby mapping more closely to the target system. Measuring the target system may provide the best results, but it is also subject to problems caused by the measurement technique applied. If we use software monitoring, the monitor's load on the system may be significant enough to distort the results. This criterion must not be overlooked and must be fully understood when selecting a modeling tool.
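
The software-monitoring point can be illustrated directly. The sketch below, in Python, times the same hypothetical workload with and without a per-event trace hook; the absolute numbers are meaningless, but the monitored run is visibly slower, which is exactly the perturbation the text warns about.

# Illustration of measurement perturbation: the same workload run bare and
# with a simple software monitor that records every iteration.

import time

def workload(n):
    total = 0
    for i in range(n):
        total += i * i
    return total

def monitored_workload(n, trace):
    total = 0
    for i in range(n):
        total += i * i
        trace.append(i)        # the "monitor": log each event as it happens
    return total

N = 1_000_000
start = time.perf_counter(); workload(N); bare = time.perf_counter() - start
trace = []
start = time.perf_counter(); monitored_workload(N, trace); monitored = time.perf_counter() - start
print(f"bare run:      {bare:.3f} s")
print(f"monitored run: {monitored:.3f} s (includes monitoring overhead)")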

The fifth criterion applicable when deciding which modeling tool to use is the model's ability to compare different alternatives simply and completely. If a model does not provide the capability to alter parameters and check alternatives, then it does not provide the capability required of a performance tradeoff study. The least flexible tools are testbed and empirical models. These are very difficult to change, since testing alternative components may require integrating multiple components into the environment and, if we are comparing entire systems, having those entire systems available. Analytical models can be altered quickly to examine different configurations or components and therefore make an attractive tool for analyses requiring numerous tradeoff studies. Petri nets are similar to analytical models in this respect and lend themselves to fairly easy alteration. Simulation models can be constructed so that they, too, allow various components to be traded off against one another. For example, if we are trading off memory-management protocols, we could implement them all in one interchangeable module and keep all the remaining components of the simulation model unchanged, as sketched below. Such an approach readily lets us focus on the differences each protocol makes in the given system.
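
As a sketch of that idea, the following Python fragment keeps the simulation driver and the reference trace fixed and swaps only the replacement policy; the two policies (FIFO and LRU) and the trace are hypothetical stand-ins for whatever memory-management protocols are actually being compared.

# Pluggable-policy sketch: the driver and trace stay unchanged while the
# memory-management policy is swapped out for each tradeoff run.

from collections import OrderedDict, deque

def fifo_policy(frames):
    """Return a page-access function using first-in, first-out replacement."""
    resident = deque()
    def access(page):
        if page in resident:
            return 0                       # hit
        if len(resident) >= frames:
            resident.popleft()             # evict the oldest resident page
        resident.append(page)
        return 1                           # fault
    return access

def lru_policy(frames):
    """Return a page-access function using least-recently-used replacement."""
    resident = OrderedDict()
    def access(page):
        if page in resident:
            resident.move_to_end(page)     # refresh recency on a hit
            return 0
        if len(resident) >= frames:
            resident.popitem(last=False)   # evict the least recently used page
        resident[page] = True
        return 1
    return access

def run_model(policy_factory, trace, frames=3):
    """The unchanged part of the model: drive a policy with a reference trace."""
    access = policy_factory(frames)
    return sum(access(page) for page in trace)

trace = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]   # hypothetical page-reference trace
for name, policy in [("FIFO", fifo_policy), ("LRU", lru_policy)]:
    print(f"{name}: {run_model(policy, trace)} page faults")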

A selection criterion often overlooked by the modeling team is cost. Most modeling projects focus on the goal at hand and do not always treat the effort like any other engineering project, in which both performance and cost must be considered. The cost can include the system under study, the tool used to perform the tradeoff studies, and the modeling staff. Intuitively, one can see that if we use empirical or testbed systems as our tools, the cost consists of the cost of the actual system, plus the cost of setting it up for measurement, plus the cost of the performance assessment staff doing the assessment. These costs can far exceed the budget of all but the largest system development projects. In addition, the cost of altering systems between analysis runs may be prohibitive, and such alteration may not even be possible. Because of this, simulations are typically used in large systems analysis projects where many tradeoff studies are required; a simulation is much easier to alter and run than the real system or even a testbed. Finally, analytical and Petri net models may be the least expensive to produce, since they do not typically require large software development or implementation efforts. The major cost in these types of studies is the analyst's salary and time.


