The goal of Part I of this text has been to provide a framework from which system designers can reason about performance-related concepts and apply these concepts throughout the entire life cycle of IT systems. Performance terms (e.g., response time, throughput, availability, reliability, security, scalability, extensibility) have been introduced. Performance results based on the operational laws (e.g., Utilization Law, Service Demand Law, Forced Flow Law, Little's Law, Interactive Response Time Law) have been defined and applied to sample systems. Simple performance bounding techniques and basic queuing network models have been established as tools to evaluate and predict system performance.
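As a compact reminder, the operational laws cited above can be written as follows, using the notation of Part I ($X_0$ for system throughput, $X_i$, $S_i$, $V_i$, $U_i$, and $D_i$ for the throughput, mean service time, visit count, utilization, and service demand at resource $i$, $N$ for the customer population, $R$ for response time, and $Z$ for think time):

$$U_i = X_i S_i \qquad \text{(Utilization Law)}$$
$$D_i = V_i S_i = \frac{U_i}{X_0} \qquad \text{(Service Demand Law)}$$
$$X_i = V_i X_0 \qquad \text{(Forced Flow Law)}$$
$$N = X_0 R \qquad \text{(Little's Law)}$$
$$R = \frac{N}{X_0} - Z \qquad \text{(Interactive Response Time Law)}$$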
The goal of Part II is to motivate, establish, and explain the basic building blocks and underpinnings of the analytical techniques introduced in Part I. It is one thing to be able to effectively apply any given performance tool. This is an important and quite useful skill. However, the deeper skill is to understand what is inside "the black box" and what comprises the various tools. Understanding the inner workings and assumptions of the modeling techniques allows them to be applied more reliably and with greater confidence. The analogy with an automobile is appropriate. Learning the functionality of, and how to drive, an automobile is a useful skill that requires practice to improve. However, it is learning how the engine operates, and the likely consequences of improper use, that makes one a truly effective (and safe!) driver. Part II provides such a basis.
This chapter describes Markov models. These models are the fundamental building blocks upon which most of the quantitative analytical performance techniques are built. Markov models themselves are based on state space diagrams. Such diagrams are powerful descriptive tools. They are both intuitive and natural, understandable by novices, yet rich enough to challenge experts. As will be seen, they can be applied across a wide range of applications. Markov models are often used to explain the current interactions between various system components. However, they can also be used for predictive purposes. Once a model is constructed, parameterized, and validated, it can be altered to predict (hopefully, accurately) what would happen if various aspects of the system's hardware or of the system's workload change. Thus, Markov models can be used for both descriptive and predictive purposes.
Basically, Markov models are relatively simple to create, solve, and use. To create such a model, the first step is to construct the state diagram by identifying all possible states in which the modeled system may find itself. Second, the state connections (i.e., transitions) must be identified. Third, the model must be parameterized by specifying the length of time spent in each state once it is entered (or, equivalently, the probability of transitioning from one state to another within the next time period). After the model is constructed, it is "solved." This involves abstracting a set of linear "balance" equations from the state diagram and solving them for the long-term "steady-state" probabilities of being in each system state. Once solved, the model can be validated and used for various performance prediction applications. Each of these aspects will be described and intuitively motivated via examples in this chapter.
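To make the "solve" step concrete, the following sketch finds the steady-state probabilities of a small discrete-time Markov chain by repeatedly applying the transition matrix until the state distribution stops changing (power iteration, one common alternative to solving the balance equations directly). The two-state chain used here is a hypothetical illustration, not an example from the text.

```python
def steady_state(P, tol=1e-12, max_iter=100_000):
    """Iterate pi <- pi * P until the distribution converges.

    P is a row-stochastic transition matrix given as a list of lists:
    P[i][j] is the probability of moving from state i to state j.
    """
    n = len(P)
    pi = [1.0 / n] * n  # start from the uniform distribution
    for _ in range(max_iter):
        nxt = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
        if max(abs(a - b) for a, b in zip(pi, nxt)) < tol:
            return nxt
        pi = nxt
    return pi

# Hypothetical 2-state chain:
# state 0 stays put with probability 0.9, state 1 with probability 0.5.
P = [[0.9, 0.1],
     [0.5, 0.5]]
pi = steady_state(P)
# The balance equation pi_0 * 0.1 = pi_1 * 0.5, together with
# pi_0 + pi_1 = 1, gives pi = (5/6, 1/6) for this chain.
```

The same steady-state vector can also be obtained by solving the linear balance equations directly, which is the approach developed later in the chapter; iteration is shown here only because it needs no linear-algebra machinery.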
The chapter outline is as follows. In Section 10.2, the overall context of system modeling is presented. In Section 10.3, two motivating examples are introduced which will be used throughout the chapter. Model construction is described in Section 10.4, followed by model solution techniques in Section 10.5. The interpretation and effective use of Markov models is the topic of Section 10.6. Section 10.7 provides the assumptions and limitations of Markov models. Topics that are somewhat beyond the basics are mentioned in Section 10.8. The chapter concludes by summarizing the primary points and suggesting several illuminating exercises.