10.9 Beyond the Basics


The approach adopted in this chapter has been to present Markov models in their simple, straightforward, and practical (i.e., vanilla) versions. The basics have been presented and applied via examples. There are many subtle points and extensions that are not addressed in depth here. To do so would require an entire text, developing and then expanding upon a richer set of basic definitions. To gain some perspective, the following definitions and facts are briefly presented. They are illustrated in the context of Fig. 10.7.

Figure 10.7. Generic Markov model.

  • Recurrent state: A recurrent state in a Markov model is a state that can always be revisited in the future after leaving the state, regardless of the subsequent states visited. In the figure, states A, B, C, F, G, H, and I are recurrent states.

  • Transient state: A transient state in a Markov model is a state where, depending on the subsequent states visited, it may not be possible to return to the state after leaving it. States D and E are transient: if, after leaving state D, the system enters state A, it is then never possible to return to state D or E.

  • Fact: Each state in a Markov model is either recurrent or transient. The sets of recurrent and transient states are mutually exclusive and collectively exhaustive.

  • Fact: All states reachable from a recurrent state are recurrent.

  • Fact: All states that can reach a transient state are transient.

  • Periodic state: A periodic state is a recurrent state to which the system can return only in p, 2p, 3p, ..., steps, where the period p is the largest such integer (equivalently, the greatest common divisor of all possible return times) and p > 1. States A, B, and C are periodic with a period of 3. The self loop around state H prevents it (and also states F, G, and I) from being periodic: state F can return to itself in 4, 5, 6, 7, 8, 9, ..., steps, and since gcd(4, 5) = 1, its period is 1 and F is not periodic.

  • Fact: All states reachable from a periodic state are periodic with the same period.

  • Chain: A chain is a maximal set of recurrent states that can all reach each other. States A, B, and C form one chain; states F, G, H, and I form another. The diagram is, thus, a multi-chain Markov model. (A sketch that classifies the states of Fig. 10.7 and computes their periods appears after this list.)

  • Discrete state, discrete transition: A Markov model is discrete state, discrete transition if the number of states is countable and transitions between states can take place only at known intervals. The Markov model of the England example (i.e., Fig. 10.4) is an example of a discrete state, discrete transition Markov model. It has four states, and transitions can take place only at known (i.e., daily) boundaries. The arc weights in these models are probabilities.

  • Discrete state, continuous transition: A Markov model is discrete state, continuous transition if the number of states is countable and transitions between states can take place at any time, driven by an exponential distribution. The Markov model of the database server example (i.e., Fig. 10.5) is an example of a discrete state, continuous transition Markov model. It has six states, and transitions can take place at any time. The transitions are governed by the speeds of the devices, whose service times are assumed (although unstated earlier) to be exponentially distributed. The arc weights in these models are flow rates.

  • Fact: Discrete state, continuous transition Markov models do not have periodic states, because transitions can take place at any time. Therefore, it has been assumed implicitly that Fig. 10.7 is a discrete state, discrete transition model with (unstated) non-zero probabilities as arc weights.

  • Major fact: Any finite Markov model with no periodic states and whose recurrent states are all in the same chain will have limiting state probabilities that are independent of the initial starting state. That is, such models will have steady state. The example in Fig. 10.7 does not have steady state that is independent of the initial starting state because it has two chains, not one. If the system starts in state A, it will cycle among states A, B, and C and will never enter the other states. Similarly, if the system starts in state F, it will remain in states F, G, H, and I and will never enter the other states. Thus, in this example, the probability of being in a state is very much dependent on the starting state. Also, if the diagram consisted only of states A, B, and C, even though it would have a single chain, it still would not have steady state probabilities that are independent of the starting state because the states are periodic. For instance, if the system started in state A at time 0, it would be known exactly in which state the system would be at any given time. For example, the probability that it would be in state B at time 3,000,001 would be 1.0, while the probability of being in state B at time 3,000,002 would be 0. Such systems are deterministic in nature. (The steady state sketches at the end of this section make the computation concrete.)

  • Fact: The steady state probability of being in a transient state is 0. For instance, suppose that states A, B, and C were not in the model. Steady state would then exist. Even if the system started in state D, it might alternate between states D and E for some arbitrary length of time, but eventually the system would enter state F, never to return to states D or E. Thus, the long term (i.e., steady state) probability of being in states D or E is 0, and the probabilities of being in states F, G, H, and I would depend on their arc weights.
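
To make the classification concrete, the following is a minimal Python sketch that labels each state of Fig. 10.7 as recurrent or transient and computes the period of each recurrent state. Since the figure is not reproduced here, the arc structure is an assumed reconstruction, consistent with the facts stated above (A→B→C→A is a 3-cycle; D and E can escape into the two chains; F→G→H→I→F is a 4-cycle with a self loop on H).

    from math import gcd

    # Assumed reconstruction of Fig. 10.7's arcs (the figure itself is
    # not reproduced): A->B->C->A; D and E are transient with escapes
    # into the two chains; F->G->H->I->F with a self loop on H.
    succ = {
        'A': ['B'], 'B': ['C'], 'C': ['A'],
        'D': ['E', 'A'], 'E': ['D', 'F'],
        'F': ['G'], 'G': ['H'], 'H': ['H', 'I'], 'I': ['F'],
    }
    states = sorted(succ)

    def reachable(s):
        """All states reachable from s in one or more steps."""
        seen, stack = set(), list(succ[s])
        while stack:
            t = stack.pop()
            if t not in seen:
                seen.add(t)
                stack.extend(succ[t])
        return seen

    reach = {s: reachable(s) for s in states}

    # A state is recurrent iff every state it can reach can reach it back.
    recurrent = {s for s in states if all(s in reach[t] for t in reach[s])}
    transient = set(states) - recurrent

    def period(s, max_len=24):
        """gcd of the lengths of all return walks up to max_len."""
        g, frontier = 0, {s}
        for n in range(1, max_len + 1):
            # frontier holds the states reachable in exactly n steps
            frontier = {t for u in frontier for t in succ[u]}
            if s in frontier:
                g = gcd(g, n)
        return g

    print("recurrent:", sorted(recurrent))        # A B C F G H I
    print("transient:", sorted(transient))        # D E
    for s in sorted(recurrent):
        print(s, "has period", period(s))         # A, B, C: 3; others: 1

Running the sketch reproduces the classification given above: D and E are the only transient states, A, B, and C have period 3, and the self loop on H forces the period of F, G, H, and I down to 1.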

The Markov models of interest are those which have steady state behavior. The two motivating examples used throughout this chapter are representative (albeit small) illustrations.
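
To illustrate the steady state computation itself, here is a minimal sketch for the discrete transition case, following the suggestion above of dropping the periodic chain A, B, and C. The individual transition probabilities are assumed values (the book leaves the arc weights of Fig. 10.7 unstated); power iteration converges to the same limit from every starting state, and the transient states D and E receive probability 0.

    import numpy as np

    # Hypothetical one-step transition matrix over D, E, F, G, H, I;
    # the probabilities are assumed for illustration only.
    states = ['D', 'E', 'F', 'G', 'H', 'I']
    P = np.array([
        [0.0, 1.0, 0.0, 0.0, 0.0, 0.0],   # D -> E
        [0.5, 0.0, 0.5, 0.0, 0.0, 0.0],   # E -> D or F
        [0.0, 0.0, 0.0, 1.0, 0.0, 0.0],   # F -> G
        [0.0, 0.0, 0.0, 0.0, 1.0, 0.0],   # G -> H
        [0.0, 0.0, 0.0, 0.0, 0.5, 0.5],   # H -> H (self loop) or I
        [0.0, 0.0, 1.0, 0.0, 0.0, 0.0],   # I -> F
    ])

    # Power iteration: pi_{n+1} = pi_n P.  The single chain {F,G,H,I}
    # is aperiodic (H's self loop), so every start converges to the
    # same limit: F = G = I = 0.2, H = 0.4, and D = E = 0.
    for start in range(len(states)):
        pi = np.zeros(len(states))
        pi[start] = 1.0               # begin with certainty in one state
        for _ in range(2000):
            pi = pi @ P
        print("start", states[start], "->", np.round(pi, 4))

For the continuous transition case the same question is answered with flow rates instead of probabilities: steady state solves pi Q = 0 (flow balance) together with the normalization sum(pi) = 1, where Q is the matrix of rates. A two-state sketch with assumed failure and repair rates (not taken from the book's Fig. 10.5):

    import numpy as np

    # Hypothetical generator (rate) matrix: a machine fails at rate
    # 0.1/hr and is repaired at rate 2.0/hr; each row sums to 0.
    Q = np.array([
        [-0.1,  0.1],   # up   -> down
        [ 2.0, -2.0],   # down -> up
    ])

    # Solve pi Q = 0 with sum(pi) = 1 by appending the normalization
    # row and using least squares.
    A = np.vstack([Q.T, np.ones(2)])
    b = np.array([0.0, 0.0, 1.0])
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    print("P[up], P[down] =", np.round(pi, 4))   # ~[0.9524 0.0476]

In the first sketch the analytic balance equations give the same answer: pi_F = pi_G = pi_I and pi_H = 2 pi_G, so pi_H = 0.4 and the rest of the chain splits the remaining 0.6 evenly. In both sketches the balance equations, not the starting state, determine the limiting probabilities, which is precisely the steady state property these models exhibit.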


