Steady-State Probabilities

The probabilities of .33 and .67 in our example are referred to as steady-state probabilities. The steady-state probabilities are average probabilities that the system will be in a certain state after a large number of transition periods. This does not mean the system stays in one state. The system will continue to move from state to state in future time periods; however, the average probabilities of moving from state to state for all periods will remain constant in the long run. In a Markov process, after a number of periods have passed, the probabilities will approach steady state. Steady-state probabilities are average, constant probabilities that the system will be in a state in the future. For our service station example, the steady-state probabilities are

[P_p  N_p] = [.33  .67]
Notice that in the determination of the preceding steady-state probabilities, we considered each starting state separately. First, we assumed that a customer was initially trading at Petroco, and the steady-state probabilities were computed given this starting condition. Then we determined that the steady-state probabilities were the same, regardless of the starting condition. However, it was not necessary to perform these matrix operations separately. We could have simply combined the operations into one matrix, as follows:
until eventually we arrived at the steady-state probabilities, with both rows of the matrix identical:

           Petroco  National
Petroco  [   .33      .67   ]
National [   .33      .67   ]
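A minimal sketch of this successive-multiplication approach in Python (the variable names are mine; the transition probabilities are the ones from the example, with .60/.40 as the Petroco row and .20/.80 as the National row):

```python
# Transition matrix for the service station example:
# row 0 = Petroco, row 1 = National
T = [[0.60, 0.40],
     [0.20, 0.80]]

def next_state(probs, T):
    """One transition period: multiply the state-probability row vector by T."""
    return [sum(probs[i] * T[i][j] for i in range(2)) for j in range(2)]

state = [1.0, 0.0]  # assume the customer initially trades at Petroco
for month in range(1, 9):
    state = next_state(state, T)
    print(f"month {month}: Pp = {state[0]:.4f}, Np = {state[1]:.4f}")

# By month 8 the probabilities have settled very close to [.33, .67],
# and the same happens if we start from [0.0, 1.0] (National) instead.
```

Rerunning the loop with `state = [0.0, 1.0]` converges to the same values, which is the "regardless of the starting condition" observation made above.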
Direct Algebraic Determination of Steady-State Probabilities

In the previous section, we computed the state probabilities for approximately eight periods (i.e., months) before the steady-state probabilities were reached for both states. This required quite a few matrix computations. Alternatively, it is possible to solve for the steady-state probabilities directly, without going through all these matrix operations. Notice that after eight periods in our previous analysis, the state probabilities did not change from period to period (i.e., from month to month). For example,
At some point in the future, the state probabilities remain constant from period to period. Thus, we can also say that after a number of periods in the future (in this case, eight), the state probabilities in period i equal the state probabilities in period i + 1. For our example, this means that

[P_p(8)  N_p(8)] = [P_p(9)  N_p(9)]

In fact, it is not necessary to designate which period in the future is actually occurring. That is, given steady-state conditions,

[P_p(i)  N_p(i)] = [P_p(i + 1)  N_p(i + 1)] = [P_p  N_p]

After steady state is reached, it is not necessary to designate the time period. These probabilities are for some period, i, in the future once a steady state has already been reached. To determine the state probabilities for period i + 1, we would normally do the following computation:

[P_p(i + 1)  N_p(i + 1)] = [P_p(i)  N_p(i)] [.6  .4]
                                            [.2  .8]
However, we have already stated that once a steady state has been reached,

[P_p(i + 1)  N_p(i + 1)] = [P_p(i)  N_p(i)]

and it is not necessary to designate the period. Thus, our computation can be rewritten as

[P_p  N_p] = [P_p  N_p] [.6  .4]
                        [.2  .8]
Performing the matrix operations results in the following set of equations:

P_p = .6P_p + .2N_p
N_p = .4P_p + .8N_p

Steady-state probabilities can be computed by developing a set of equations, using matrix operations, and solving them simultaneously. Recall that the transition probabilities for a row in the transition matrix (i.e., the state probabilities) must sum to one:

P_p + N_p = 1.0

which can also be written as

N_p = 1.0 − P_p

Substituting this value into our first foregoing equation (P_p = .6P_p + .2N_p) results in the following:

P_p = .6P_p + .2(1.0 − P_p)
P_p = .6P_p + .2 − .2P_p
.6P_p = .2
P_p = .33
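The same substitution can be carried out in code. For any two-state chain, writing a for the probability of leaving the first state and b for the probability of entering it, the equations above reduce to P_p = b/(a + b); the function below is a sketch under that formulation (the function name is mine, not the text's):

```python
def steady_state_two_state(a, b):
    """Steady-state probabilities of a two-state Markov chain.
    a = P(leave state 1), b = P(enter state 1 from state 2).
    Solving Pp = (1 - a)Pp + b(1 - Pp) gives Pp = b / (a + b)."""
    p1 = b / (a + b)
    return p1, 1.0 - p1

# Service station example:
# a = .40 (Petroco -> National), b = .20 (National -> Petroco)
Pp, Np = steady_state_two_state(0.40, 0.20)
print(round(Pp, 2), round(Np, 2))  # 0.33 0.67
```

The closed form makes clear why no iteration is needed for two states: the steady state depends only on the two switching probabilities.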
and

N_p = 1.0 − P_p = 1.0 − .33 = .67

These are the steady-state probabilities we computed in our previous analysis:

[P_p  N_p] = [.33  .67]

Application of the Steady-State Probabilities

The steady-state probabilities indicate not only the probability of a customer's trading at a particular service station in the long-term future but also the percentage of customers who will trade at a service station during any given month in the long run. For example, if there are 3,000 customers in the community who purchase gasoline, then in the long run the following expected numbers will purchase gasoline at each station on a monthly basis:

Petroco:  P_p(3,000) = .33(3,000) = 990 customers
National: N_p(3,000) = .67(3,000) = 2,010 customers
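These expected counts follow from multiplying each steady-state probability by the 3,000 total customers; a quick sketch, using the rounded .33/.67 probabilities as the text does:

```python
# Expected monthly customers at each station in the long run,
# using the rounded steady-state probabilities from the example.
total_customers = 3000
steady_state = {"Petroco": 0.33, "National": 0.67}
expected = {station: round(p * total_customers)
            for station, p in steady_state.items()}
print(expected)  # {'Petroco': 990, 'National': 2010}
```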
Steady-state probabilities can be multiplied by the total number of system participants to determine the expected number in each state in the future. Now suppose that Petroco has decided that it is getting less than a reasonable share of the market and would like to increase its market share. To accomplish this objective, Petroco has improved its service substantially, and a survey indicates that the transition probabilities have changed to the following:

           Petroco  National
Petroco  [   .70      .30   ]
National [   .20      .80   ]
In other words, the improved service has resulted in a smaller probability (.30) that customers who traded initially at Petroco will switch to National the next month. Now we will recompute the steady-state probabilities, based on this new transition matrix:

P_p = .7P_p + .2N_p
N_p = .3P_p + .8N_p
Using the first equation and the fact that N_p = 1.0 − P_p, we have

P_p = .7P_p + .2(1.0 − P_p)
P_p = .7P_p + .2 − .2P_p
.5P_p = .2
P_p = .4
and thus

N_p = 1 − P_p = 1 − .4 = .6

This means that out of the 3,000 customers, Petroco will now get 1,200 customers (i.e., .40 × 3,000) in any given month in the long run. Thus, the improvement in service will result in an increase of 210 customers per month (if the new transition probabilities remain constant for a long period of time in the future). In this situation Petroco must evaluate the trade-off between the cost of the improved service and the increase in profit from the additional 210 customers. For example, if the improved service costs $1,000 per month, then the extra 210 customers must generate an increase in profit greater than $1,000 to justify the decision to improve service. This brief example demonstrates the usefulness of Markov analysis for decision making. Although Markov analysis does not yield a recommended decision (i.e., a solution), it does provide information that will help the decision maker to make a decision. Markov analysis results in probabilistic information, not a decision.

Determination of Steady States with QM for Windows

QM for Windows has a Markov analysis module, which is extremely useful when the dimensions of the transition matrix exceed two states. The algebraic computations required to determine steady-state probabilities for a transition matrix with even three states are lengthy; for a matrix with more than three states, computing capabilities are a necessity. Markov analysis with QM for Windows will be demonstrated using the service station example in this section. Exhibit F.1 shows our example input data for the Markov analysis module in QM for Windows. Note that it is not necessary to enter a number of transitions to get the steady-state probabilities; the program automatically computes the steady state. "Number of Transitions" refers to the number of transition computations you might like to see. Exhibit F.2 shows the solution with the steady-state transition matrix for our service station example.
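As a cross-check on these results, the whole service-improvement trade-off described above can be sketched in a few lines (the $1,000 monthly cost and 3,000 customers are the example's figures; the function name is mine):

```python
# Two-state steady state: Pp = b / (a + b), where a = P(leave Petroco)
# and b = P(enter Petroco from National).
def steady_pp(a, b):
    return b / (a + b)

customers = 3000
old_share = 0.33                    # rounded steady state, original matrix
new_share = steady_pp(0.30, 0.20)   # improved-service matrix -> .40
extra = round(new_share * customers - old_share * customers)
print(extra)  # 210 additional customers per month

# The $1,000 monthly cost of improved service is justified only if the
# extra customers generate more than that in added profit:
service_cost = 1000
min_profit_per_extra_customer = service_cost / extra  # about $4.76
```

A computation like this scales to larger matrices only with more general linear-algebra tools, which is exactly the situation where a package such as QM for Windows earns its keep.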
