10.2 Modeling Context


Consider the modeling paradigm described in Fig. 10.1. The initial step in evaluating the performance of a system is to construct a contextual model of the system being considered. The focus within this chapter is on basic Markov models. However, other models are also viable, including prototype models, simulation models, and analytical models.

Prototype models involve physically constructing a scaled version of the actual system and executing a typical workload on the prototype. The primary advantage is accuracy; the primary disadvantage is cost.

Simulation models involve writing detailed software programs that emulate (one hopes, accurately) the performance of the system. Trace-driven simulations take a script from a typical workload (e.g., arrival times of requests, details of each request) and then mimic the behavior of the actual system. Simulations tend to be less accurate, yet much less costly, than prototype models.

Analytical models, which include Markov models, capture the key relationships between the architecture and the workload components in mathematical expressions. For instance, instead of relying on a specific execution trace to provide the time between successive requests to a disk, a random variable from a representative distribution is used. The operational laws introduced in Chapter 3 are examples of mathematical relationships used by analytical models. The key advantages of such models are that they capture, and provide insight into, the interdependencies between the various system components. They are also flexible, inexpensive, and easily changed. The disadvantages are that they lack detail and tend to be more difficult to validate. Thus, there are tradeoffs among the various modeling techniques.

Figure 10.1. Modeling paradigm.


Within the context of Markov models, model construction consists of three steps: state space enumeration, state transition identification, and parameterization. State space enumeration involves specifying all reachable states that the system might enter. State transition identification indicates which states can be entered directly from any other given state. Parameterization involves making measurements of, and assumptions about, the original system. As will be seen with Markov models, model construction involves identifying not only all the states in which the system may find itself, but also how long the system typically stays in each state and which states are immediately accessible from any given state. Measurements, intuition, published results from the literature, and various assumptions are often used to parameterize a model.
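To make these three steps concrete, the following is a minimal sketch in Python (not from the original text) of constructing a Markov model for a hypothetical single-server system that can hold at most three requests. The arrival rate lam and service rate mu are assumed, illustrative values; in practice they would come from the measurements and assumptions described above.

```python
import numpy as np

# Assumed, illustrative parameter values; in practice these come from
# measurements of the actual system.
lam, mu = 8.0, 10.0   # arrival rate and service rate (requests/sec)

# Step 1 -- state space enumeration: the state is the number of requests
# in the system, 0 through 3 (at most one in service, two waiting).
K = 3
states = range(K + 1)

# Steps 2 and 3 -- state transition identification and parameterization:
# build the generator matrix Q, where Q[i][j] is the rate of moving from
# state i to state j, and each diagonal entry makes its row sum to zero.
Q = np.zeros((K + 1, K + 1))
for i in states:
    if i < K:
        Q[i, i + 1] = lam   # an arrival moves the system up one state
    if i > 0:
        Q[i, i - 1] = mu    # a completion moves the system down one state
    Q[i, i] = -Q[i].sum()
```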

After model construction, the model must be "solved." With prototype models, this involves running an experiment on the newly constructed hardware and monitoring its performance. With simulation models, this involves running a software package (i.e., the simulator) and recording the emulated performance results. With analytical models, this involves solving a set of mathematical equations and interpreting the performance expressions correctly.
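For the small Markov model sketched above, "solving" reduces to linear algebra: the steady-state probability vector pi satisfies pi Q = 0 together with the condition that the probabilities sum to one. The fragment below continues the earlier sketch; the derived metrics are standard, but the specific numbers are illustrative only.

```python
# Solve pi Q = 0 with sum(pi) = 1 by replacing one balance equation
# with the normalization condition.
A = np.vstack([Q.T[:-1], np.ones(K + 1)])
b = np.zeros(K + 1)
b[-1] = 1.0
pi = np.linalg.solve(A, b)

utilization = 1.0 - pi[0]        # probability the server is busy
throughput = mu * utilization    # completions per second
print(pi, utilization, throughput)
```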

Once constructed and solved, the model must be calibrated. Calibration involves comparing the performance results obtained from the model against those observed in the actual system. Any discrepancies are resolved by questioning each component and each assumption in the modeling process. Often, one must return to a previous step, since modeling errors may be discovered during calibration. It is not atypical to cycle among the various steps before an acceptable model is found. The result of this iterative refinement is a calibrated model: one that matches the actual system on a finite set of previously observed (i.e., baseline) performance measures.
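As a sketch of what calibration might look like in code, the fragment below compares the model's outputs against hypothetical measured baselines and flags any metric whose relative error exceeds a chosen tolerance. Both the measured values and the 10% threshold are assumptions for illustration, not from the original text.

```python
# Hypothetical baseline measurements from the actual system.
measured = {"utilization": 0.78, "throughput": 7.9}
predicted = {"utilization": utilization, "throughput": throughput}

TOLERANCE = 0.10   # assumed acceptance threshold: 10% relative error
for metric, obs in measured.items():
    err = abs(predicted[metric] - obs) / obs
    status = "OK" if err <= TOLERANCE else "REVISIT MODEL"
    print(f"{metric}: predicted={predicted[metric]:.3f} "
          f"measured={obs:.3f} error={err:.1%} [{status}]")
```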

Once calibrated, the baseline model can be used for the important purpose of prediction. As with a weather model, one that merely matches previously observed weather patterns is of little use. One is much more interested in (and impressed by) a model that can predict what will happen before it actually does. The same is true in computer modeling. By altering the baseline model (e.g., adding future growth rate parameters, changing the hardware parameters to reflect anticipated upgrades) and then re-solving the model, one can predict the future performance of a system before it occurs.
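A sketch of the prediction step, continuing the same example: wrap the construction and solution steps in a function, scale the arrival rate by an assumed 25% annual growth factor, and re-solve to see how a future metric evolves (here, the probability that an arriving request finds the system full). The growth rate is a hypothetical planning parameter.

```python
def solve_birth_death(lam, mu, k=3):
    """Steady-state probabilities of the small model built earlier."""
    Q = np.zeros((k + 1, k + 1))
    for i in range(k + 1):
        if i < k:
            Q[i, i + 1] = lam
        if i > 0:
            Q[i, i - 1] = mu
        Q[i, i] = -Q[i].sum()
    A = np.vstack([Q.T[:-1], np.ones(k + 1)])
    b = np.zeros(k + 1)
    b[-1] = 1.0
    return np.linalg.solve(A, b)

# Assumed 25% annual workload growth; hardware (mu) held fixed.
for year in range(4):
    pi = solve_birth_death(lam * 1.25 ** year, mu)
    print(f"year {year}: P(system full) = {pi[-1]:.3f}")
```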

The final step is one of accountability: a validity check that the prediction is accurate. Too often, this step is ignored. It is more common to make a prediction, collect a consultant's fee, go on to another project (or vacation!), and never return to see whether one's prediction actually came true. Time is usually the reason. Often there is a significant lag of several months between when a prediction is made and when the new system is implemented. It is also common for assumptions to change after a prediction is originally made. In either case, the performance prediction analyst rarely faces final accountability. However, it is only by completing this final check, by answering the harder questions raised when predictions prove incorrect, and by returning to a previous step in the modeling process, that the overall modeling paradigm is improved and the resulting prediction model is truly validated.

Markov models are described in the context of this overall modeling paradigm. They form an analytical modeling technique that is the basis of other analytical techniques (e.g., queuing network models, Petri net models). Emphasis will be placed on the model construction, solution, and prediction steps. Two motivating examples will be used to demonstrate the methodology.


