The model solution step is, perhaps surprisingly, the most straightforward aspect of the entire modeling paradigm. Robust software solution packages based on the mean value analysis (MVA) technique (and provided with this text) are available.
Given a parameterized Markov model, "model solution" means finding the long term (i.e., the "steady state") probability of being in any particular state. This is the probability that a random snapshot (or checkpoint) of the system finds it in that state. Steady state is independent of the initial starting state of the system. For instance, consider the England example. If the lad starts out, say, kayaking in the Lake District, there is a good chance (i.e., 70%) that a snapshot taken one day later will find him still kayaking. However, this does not mean that on some random day, say, two months from now, the lad will be found kayaking with probability 70%. The steady state solution is the overall probability of being in each system state, independent of the starting state.
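This "snapshot after many days" intuition can be checked numerically. The sketch below uses a hypothetical two-state chain (the transition probabilities are illustrative, not those of Fig. 10.4) and shows that two different starting states converge to the same long-run distribution:

```python
# A minimal sketch of the idea that the steady state does not depend on
# the starting state. The two-state transition matrix below is
# hypothetical (NOT taken from Fig. 10.4): row i holds the probabilities
# of moving from state i to each state.
P = [[0.7, 0.3],
     [0.4, 0.6]]

def step(dist, P):
    """One day later: new_dist[j] = sum over i of dist[i] * P[i][j]."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

def iterate(dist, P, days):
    for _ in range(days):
        dist = step(dist, P)
    return dist

# Start definitely in state 0, and definitely in state 1:
from_state_0 = iterate([1.0, 0.0], P, 60)
from_state_1 = iterate([0.0, 1.0], P, 60)
print(from_state_0)  # both converge to about [0.571, 0.429], i.e., 4/7 and 3/7
print(from_state_1)
```

After enough days, the distribution is the same regardless of where the lad began, which is exactly the steady state property described above.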
To find these steady state probabilities, a set of linear "balance" equations is formed. In general, there is one balance equation for each system state. Thus, given a Markov model with N states, there are N desired unknowns (i.e., the steady state probability of being in each state) along with N linear equations. The solution is a straightforward linear algebra problem. This is illustrated with each of the two motivating examples.
Random Walk Through England: Model Solution
Reconsider Fig. 10.4. The balance equation for each system state is one that represents the fact that the overall "flow" into each state must equal the overall flow out of each state. That is, on average, the lad must walk out of the Leeds pub as many times as he walks in. (The consequences of any other result would reduce the system to one where the lad is either always or never in the pub.)
Let Pi represent the (steady state) probability of being in state i. Thus, P1 represents the probability that the lad is in the Leeds pub, P2 represents the probability that the lad is sightseeing in London, P3 represents the probability that the lad is kayaking in the Lake District, and P4 represents the probability that the lad is hiking in the Yorkshire moors.
Consider, for example, state 2 (i.e., sightseeing in London). The flow into state 2 is represented by the single incoming arc to state 2 in Fig. 10.4. That is, the only way to get to London is to first be in state 1 (i.e., in Leeds, which has a steady state probability of P1) and then to select (with probability 0.6) to go to London the next day. Thus, the flow into state 2 is 0.6 x P1. The flow out of state 2 is represented by the two outgoing arcs from state 2. That is, the only way to leave state 2 is to first be in state 2 (i.e., with probability P2) and then to select (with probability 0.2 + 0.8) to leave state 2 the next day. Thus, the flow out of state 2 is simply (0.2 + 0.8) x P2 = P2. Therefore, the overall (flow in equals flow out) balance equation for state 2 is:

0.6 x P1 = P2
Similarly, consider state 1. The flow into state 1 is represented by the sum of the incoming arcs from the other three states, that is, (0.2 x P2) + (0.1 x P3) + (0.3 x P4). The flow out of state 1 is represented by the single outgoing arc, 0.6 x P1. Thus, the overall balance equation for state 1 is:
(0.2 x P2) + (0.1 x P3) + (0.3 x P4) = 0.6 x P1
[Note: The self loop from state 1 back to itself represents the probability that the lad remains in Leeds on successive days. While important, this probability can be inferred from the other outgoing transition probabilities from state 1. That is, since the sum of the outgoing probabilities to other states is 0.6, the remaining 0.4 proportion must be a self-loop. The self loop does not directly exhibit itself in the state's balance equation. The reason is that, strictly speaking, it appears as both an incoming and outgoing arc on opposite sides of the state's balance equation, thereby cancelling each other out. Expressed differently, there are four incoming arcs to state 1, and two outgoing arcs, including the self loop. Therefore, the full balance equation for state 1 is:
(0.4 x P1) + (0.2 x P2) + (0.1 x P3) + (0.3 x P4) = (0.6 + 0.4) x P1
which is equivalent to the previous equation. Thus, for simplicity and without loss of generality, self loops can be ignored when creating the balance equations.]
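The cancellation can be verified numerically. The sketch below uses the known arcs out of state 1 (0.6 to state 2, hence the 0.4 self loop) and arbitrary trial values for P1 through P4 (these are NOT the solved steady state probabilities); the full and simplified forms of the balance equation impose the same constraint:

```python
from fractions import Fraction as F

# Sketch of why the self loop cancels out of state 1's balance equation.
# Arcs out of state 1: 0.6 (to state 2) and the 0.4 self loop.
# Arcs into state 1: 0.2, 0.1, 0.3 from states 2, 3, 4, plus the self loop.
# The trial values for P1..P4 are arbitrary (NOT the solved answer);
# the cancellation holds for any values.
P1, P2, P3, P4 = F(1, 4), F(1, 4), F(1, 4), F(1, 4)

full_in   = F(4, 10)*P1 + F(2, 10)*P2 + F(1, 10)*P3 + F(3, 10)*P4
full_out  = (F(6, 10) + F(4, 10)) * P1
short_in  = F(2, 10)*P2 + F(1, 10)*P3 + F(3, 10)*P4
short_out = F(6, 10) * P1

# The 0.4 x P1 term appears on both sides of the full equation, so the
# two forms impose exactly the same constraint:
print(full_in - full_out == short_in - short_out)  # True
```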
The balance equations for Fig. 10.4, for states 1, 2, 3, and 4, respectively, are:
Thus, there are four equations and four unknowns. At this point, it is tempting to conclude that simple algebra can be used to uniquely solve for the desired results P1, P2, P3, and P4. However, there is one small issue. This set of equations (and, in general, any set of balance equations from a Markov model) is under-determined: any three of the equations can be used to derive the fourth. (To verify this, simply add the left hand sides of the first three equations and set the result equal to the sum of their right hand sides. After simplifying, the result is the fourth equation.) That is, one of the equations is redundant and can be eliminated.
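This redundancy is not specific to the example; it follows from the fact that each state's outgoing probabilities sum to 1. The sketch below demonstrates it on a hypothetical 3-state chain (illustrative numbers, not the Fig. 10.4 values):

```python
from fractions import Fraction as F

# A sketch of why one balance equation is always redundant, on a
# hypothetical 3-state chain (illustrative numbers, NOT Fig. 10.4).
# State j's balance equation has coefficient p[i][j] - (1 if i == j)
# on the unknown Pi, with right-hand side zero.
p = [[F(5, 10), F(3, 10), F(2, 10)],
     [F(1, 10), F(6, 10), F(3, 10)],
     [F(2, 10), F(2, 10), F(6, 10)]]
n = len(p)
coeff = [[p[i][j] - (1 if i == j else 0) for i in range(n)] for j in range(n)]

# Because each row of p sums to 1, adding all n equations gives 0 = 0,
# so any one equation is implied by the other n - 1:
summed = [sum(coeff[j][i] for j in range(n)) for i in range(n)]
print(summed == [0, 0, 0])  # True
```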
To find the unique solution, one of the equations is replaced by the "conservation of total probability" equation, which states that the system must be in one of the system states. That is,

P1 + P2 + P3 + P4 = 1
By replacing, say, the fourth equation with this conservation equation, the resulting set of (now independent) four equations in four unknowns is:
The linear algebraic solution to this set of equations is:
These represent the steady state probabilities of the lad being in each of the four states, 1 through 4, respectively.
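The whole recipe can be sketched in code: form one balance equation per state, replace the last one with the conservation equation, and solve the linear system. Because the full arc set of Fig. 10.4 appears only in the figure, the sketch below uses a hypothetical 3-state chain with illustrative probabilities:

```python
from fractions import Fraction

# A sketch of the full recipe on a hypothetical 3-state chain
# (illustrative probabilities, NOT the values of Fig. 10.4).
p = [[Fraction(5, 10), Fraction(3, 10), Fraction(2, 10)],
     [Fraction(1, 10), Fraction(6, 10), Fraction(3, 10)],
     [Fraction(2, 10), Fraction(2, 10), Fraction(6, 10)]]
n = len(p)

# Balance equation for state j:  sum over i of Pi * p[i][j]  =  Pj
A = [[p[i][j] - (1 if i == j else 0) for i in range(n)] for j in range(n)]
b = [Fraction(0)] * n
A[-1] = [Fraction(1)] * n   # replace the last equation with P1+P2+P3 = 1
b[-1] = Fraction(1)

# Gauss-Jordan elimination with partial pivoting (exact arithmetic).
for col in range(n):
    piv = max(range(col, n), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    b[col], b[piv] = b[piv], b[col]
    for r in range(n):
        if r != col and A[r][col] != 0:
            f = A[r][col] / A[col][col]
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]

steady = [b[r] / A[r][r] for r in range(n)]
print(steady)  # [Fraction(10, 43), Fraction(16, 43), Fraction(17, 43)]
```

Exact rational arithmetic is used here for clarity; in practice, a standard floating point linear solver works just as well.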
Database Server Support: Model Solution
Reconsider Fig. 10.5. Mimicking the previous example, the "flow in = flow out" balance equation for each of the six states can be formed easily. The only difference is that, instead of arc weights being probabilities as in the previous example, the arc weights in the database server example are based on the service rates of the devices. [Note: Steady state probabilities do not depend on the absolute value of the arc weights, but rather on their relative values. That is, by multiplying all of the arc weights by any constant, the steady state values remain the same.] The balance equations for the six states, (2,0,0), (1,1,0), (1,0,1), (0,2,0), (0,1,1), and (0,0,2), are:
As before, one of the equations is "redundant." Therefore, by arbitrarily replacing the last equation by the conservation of total probability equation,
P(2,0,0) + P(1,1,0) + P(1,0,1) + P(0,2,0) + P(0,1,1) + P(0,0,2) = 1
the six equations in six unknowns can be uniquely solved to find the steady state probabilities for the six states:
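The same "flow in = flow out" reasoning with rate-weighted arcs can be sketched on a smaller chain. The rates below are hypothetical (NOT the Fig. 10.5 values): a birth-death chain with 0, 1, or 2 jobs present, arrival rate lam, and service rate mu:

```python
from fractions import Fraction

# A sketch of "flow in = flow out" with rate-weighted arcs, using a
# hypothetical birth-death chain (illustrative rates, NOT Fig. 10.5):
# states 0, 1, 2 jobs present; arrival rate lam, service rate mu.
lam, mu = Fraction(2), Fraction(3)

# Balance for state 0:   lam * pi0 = mu * pi1   ->   pi1 = (lam/mu) * pi0
# Balance for state 2:   lam * pi1 = mu * pi2   ->   pi2 = (lam/mu) * pi1
# Conservation:          pi0 + pi1 + pi2 = 1
r = lam / mu
pi0 = 1 / (1 + r + r * r)
pi = [pi0, r * pi0, r * r * pi0]
print(pi)  # [Fraction(9, 19), Fraction(6, 19), Fraction(4, 19)]

# The middle state's balance equation then holds automatically:
assert lam * pi[0] + mu * pi[2] == (lam + mu) * pi[1]
```

Note that only the ratio lam/mu enters the result, which illustrates the earlier remark: multiplying all arc weights by a constant leaves the steady state probabilities unchanged.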