6.5 Markov process


A Markov process is a stochastic process with some additional properties. If a stochastic process's future state probabilities depend only on its present state and not on how the process arrived at that state, then it is a Markov process.

More formally, a stochastic process {X(t), t ∈ T} is a Markov process if, for any set of n + 1 values t_1 < t_2 < ... < t_n < t_{n+1} in the index set and any set of states {x_1, x_2, ..., x_n, x_{n+1}}:

(6.33) $P[X(t_{n+1}) = x_{n+1} \mid X(t_n) = x_n, X(t_{n-1}) = x_{n-1}, \ldots, X(t_1) = x_1] = P[X(t_{n+1}) = x_{n+1} \mid X(t_n) = x_n]$

All birth-death processes are Markov processes; hence, the Poisson process is also a Markov process (Figure 6.12).

Figure 6.12: Mapping of Markov process to other stochastic processes.

A discrete-state Markov process is called a Markov chain. A Markov chain consists of discrete states {E_0, E_1, E_2, ...}. These states are typically labeled with the nonnegative integers {0, 1, 2, 3, 4, ...} rather than with the more formal notation given previously. A discrete-time Markov chain makes a state transition at each time t_n, n = 1, 2, 3, ... (possibly into the same state). The notation for this transition and the resultant state is:

(6.34) $p_{ij}(n) = P[X_{n+1} = j \mid X_n = i]$

We will normally be interested only in Markov chains that have stationary state transition probabilities:

(6.35) $p_{ij} = P[X_{n+1} = j \mid X_n = i]$ for all $n$

The state transition probability p_ij represents the probability of transitioning from state i to state j in a single step. For an entire Markov chain, we represent the collection of all such transition probabilities as a state transition matrix P, as shown in Figure 6.13.

Figure 6.13: Example probability state transition matrix.

The requirements for entries in the state transition probability matrix are as follows:

(6.36) $0 \le p_{ij} \le 1$ for all $i, j$

(6.37) $\sum_j p_{ij} = 1$ for all $i$
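
As a quick check of these requirements, the sketch below (Python with NumPy, using a made-up 3-state matrix purely for illustration) verifies both conditions for a candidate transition matrix:

    import numpy as np

    # Hypothetical 3-state transition matrix, used only for illustration.
    P = np.array([[0.5, 0.3, 0.2],
                  [0.1, 0.8, 0.1],
                  [0.0, 0.4, 0.6]])

    # Requirement (6.36): every entry lies in [0, 1].
    assert np.all((P >= 0.0) & (P <= 1.0))

    # Requirement (6.37): every row sums to 1, since each state must
    # transition to some state (possibly itself) at every step.
    assert np.allclose(P.sum(axis=1), 1.0)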

An example using such a matrix involves a sequence of Bernoulli trials. In a Bernoulli experiment each trial can only succeed or fail: a trial succeeds with probability p and fails with probability q = 1 - p. In this example we take the state at trial n, X_n, to be the length of the current run of uninterrupted successes (i.e., the number of consecutive successes up to and including trial n). Suppose the following sequence of trials occurs; the sample outcomes, the index n, and X_n are:

Sample trial:   F   S   S   F   F   S   S   S   F
n =             0   1   2   3   4   5   6   7   8
X_n =           0   1   2   0   0   1   2   3   0

The state transition diagram is shown in Figure 6.14.

Figure 6.14: State transition diagram.

The resulting probability state transition matrix is composed of the following elements, as seen in Figure 6.15.


Figure 6.15: Transition probability matrix.
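
The transition structure follows directly from the state definition: from any state i, a failure (probability q) returns the chain to state 0, while a success (probability p) advances it to state i + 1. A minimal sketch that builds such a matrix (Python/NumPy, truncated to a finite number of states purely so it fits in an array) is:

    import numpy as np

    def success_run_matrix(p, n_states):
        # From any state i, failure (prob. q = 1 - p) resets the run to
        # state 0 and success (prob. p) extends the run to state i + 1.
        # The last state absorbs further successes only because of the
        # truncation to n_states states.
        q = 1.0 - p
        P = np.zeros((n_states, n_states))
        for i in range(n_states):
            P[i, 0] = q
            P[i, min(i + 1, n_states - 1)] += p
        return P

    print(success_run_matrix(0.5, 4))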

Using these probabilities we now wish to compute the state probabilities for the entire graph, assuming equilibrium as before.

If we let π_j(n) represent the probability of being in state j after the nth step (transition):

(6.38) $\pi_j(n) = P[X_n = j]$

then:

(6.39) $\pi_j(n) = \sum_i \pi_i(n-1)\, p_{ij}$

For a finite number of states, j = 0, 1, 2, ..., N - 1:

(6.40) $\pi_j(n) = \sum_{i=0}^{N-1} \pi_i(n-1)\, p_{ij}$

Let $\pi(n) = [\pi_0(n), \pi_1(n), \ldots, \pi_{N-1}(n)]$ and let P be the N × N state transition matrix:

(6.41) $P = \begin{bmatrix} p_{00} & p_{01} & \cdots & p_{0,N-1} \\ p_{10} & p_{11} & \cdots & p_{1,N-1} \\ \vdots & \vdots & & \vdots \\ p_{N-1,0} & p_{N-1,1} & \cdots & p_{N-1,N-1} \end{bmatrix}$

Then, in vector notation:

(6.42) $\pi(n) = \pi(n-1)\, P$

With stationary (homogeneous) transition probabilities this recursion collapses to:

(6.43) $\pi(n) = \pi(0)\, P^n$
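
To make the recursion concrete, the following sketch (Python/NumPy, with the same made-up 3-state matrix as before) steps the distribution forward using (6.42) and checks that the result matches the closed form (6.43):

    import numpy as np

    P = np.array([[0.5, 0.3, 0.2],      # hypothetical 3-state chain
                  [0.1, 0.8, 0.1],
                  [0.0, 0.4, 0.6]])
    pi = np.array([1.0, 0.0, 0.0])      # pi(0): start in state 0

    # Equation (6.42): advance the state probabilities one step at a time.
    for n in range(4):
        pi = pi @ P

    # Equation (6.43): the same result from the n-step matrix pi(0) P^n.
    pi_direct = np.array([1.0, 0.0, 0.0]) @ np.linalg.matrix_power(P, 4)
    assert np.allclose(pi, pi_direct)
    print(pi)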

As an example, assume we have a communications system that transmits the digits 0 and 1 through several stages (Figure 6.16). We assume that at each stage there is a probability of 0.75 that the output digit will be the same as the input digit.


Figure 6.16: Communications systems stages.


One question we may ask is: what is the probability that a 0 entering the first stage is output as a 0 from the fourth stage? The solution requires representing the problem as a Markov chain and applying a matrix solution. Let the state at step n, X_n, denote the value output by the nth stage.

Assume a 0 is input to stage 1 as shown in Figure 6.17.


Figure 6.17: Transition probabilities for the communications systems of Figure 6.16.

What is the probability that X_4 = 0?

(6.44) $P[X_4 = 0 \mid X_0 = 0]$

Let:

(6.45) $\pi(0) = [1, 0]$ (a 0 enters the first stage, so the chain starts in state 0)

Given the probability state transition matrix:

$P = \begin{bmatrix} 0.75 & 0.25 \\ 0.25 & 0.75 \end{bmatrix}$

The general solution for stationary Markov chains, $\pi(n) = \pi(0)\, P^n$, uses the n-step transition probability matrix $P^n$; here the desired probability is the first component of $\pi(4) = \pi(0)\, P^4$.
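
A short sketch of this calculation (Python/NumPy, using the two-state matrix above):

    import numpy as np

    # Two states: the digit (0 or 1) output by a stage.
    P = np.array([[0.75, 0.25],
                  [0.25, 0.75]])
    pi0 = np.array([1.0, 0.0])    # a 0 enters the first stage

    pi4 = pi0 @ np.linalg.matrix_power(P, 4)
    print(pi4[0])                 # P[X_4 = 0 | X_0 = 0] = 0.53125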

6.5.1 Markov chain definitions

State j is said to be reachable from state i if it is possible for the chain to proceed from state i to state j in a finite number of transitions:

(6.46) $p_{ij}^{(n)} > 0$ for some $n \ge 1$

where $p_{ij}^{(n)}$ denotes the n-step transition probability from state i to state j.

If every state is reachable from every other state, the chain is said to be irreducible. Using the Bernoulli coin toss trials as before, we get the transition state diagram shown in Figure 6.18.


Figure 6.18: Transition state diagram (Bernoulli trials, coin toss).

In Figure 6.18, we can see that from any state we can reach every other state in some number of steps. For example, if we are in state 3 we can reach state 0 by transitioning through arc (3,0). We can then get to state 1 by arc (0,1) and then to state 2 by arc (1,2). One other point to note is that if we are in state 0, we can transition back to state 0 via arc (0,0). After checking the paths between all pairs of states, we can see that this Markov chain is irreducible.
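
This kind of reachability check can also be automated. The sketch below (Python/NumPy) tests irreducibility by asking whether every state can reach every other state along arcs of nonzero probability; the matrix shown is the truncated run-of-successes chain from the earlier sketch with p = 0.5, so it is only an illustration:

    import numpy as np

    def is_irreducible(P):
        # State j is reachable from state i iff entry (i, j) of
        # (I + A)^(n-1) is nonzero, where A marks the arcs that have
        # nonzero probability.
        n = P.shape[0]
        A = (P > 0).astype(int)
        closure = np.linalg.matrix_power(np.eye(n, dtype=int) + A, n - 1)
        return bool(np.all(closure > 0))

    P = np.array([[0.5, 0.5, 0.0, 0.0],
                  [0.5, 0.0, 0.5, 0.0],
                  [0.5, 0.0, 0.0, 0.5],
                  [0.5, 0.0, 0.0, 0.5]])
    print(is_irreducible(P))   # True: every state reaches every other state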

In the second example (Figure 6.19), the graph is reducible: once the chain takes arc (0,11) into the sub-chain consisting of nodes 11, 12, 13, and 14, it can never reach the sub-chain consisting of nodes 21, 22, 23, and 24.

Figure 6.19: Reducible transition diagram.

We will generally be interested in the behavior of processes that can be represented by irreducible chains, since they are more easily solved and equilibrium can be achieved or assumed in such systems.

Another property of interest in Markov chains is the concept of ergodic chains. A discrete-time Markov chain is said to be ergodic if (1) you can get from any state to any other state (i.e., the chain is irreducible); (2) for each state, the return paths to that state have lengths that are not all multiples of the same integer (i.e., the chain is aperiodic); and (3) upon leaving a state, the chain returns to it with probability 1 within a finite mean time (i.e., the state is positive recurrent). This last property implies the first, in that a return path must exist, and such a path need visit at most all of the arcs.

A Markov chain is said to have a stationary distribution if:

(6.47) $\pi_j(n) = \pi_j$ for all $n$ (the state probabilities do not change from step to step)

and if there is a vector $\pi = [\pi_0, \pi_1, \pi_2, \ldots]$ such that:

(6.48) $\pi = \pi P$

with

(6.49) $\pi_j \ge 0$ for all $j$

and

(6.50) $\sum_j \pi_j = 1$

Equivalently:

(6.51) $\pi_j = \sum_i \pi_i\, p_{ij}$

For an ergodic Markov chain, the limit:

(6.52) $\pi_j = \lim_{n \to \infty} p_{ij}^{(n)}$

always exists and forms a stationary probability distribution that is independent of the initial state:

(6.53) $\pi_j = \lim_{n \to \infty} P[X_n = j]$, independent of the initial state $i$

The limiting distribution is the unique solution to the equations:

(6.54) $\pi_j = \sum_i \pi_i\, p_{ij}, \qquad \sum_j \pi_j = 1$

Furthermore, for each state:

(6.55) $\pi_i = 1/m_i$

where m_i is the mean recurrence time for state i, the mean number of steps taken to return to the state after leaving it.
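
A sketch of how the fixed-point equations (6.54) might be solved numerically (Python/NumPy; the two-state matrix is the communications-system chain from the earlier example):

    import numpy as np

    def stationary_distribution(P):
        # Solve pi = pi P together with sum(pi) = 1, i.e., equation (6.54),
        # as an overdetermined linear system in least-squares form.
        n = P.shape[0]
        A = np.vstack([P.T - np.eye(n), np.ones(n)])
        b = np.zeros(n + 1)
        b[-1] = 1.0
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pi

    P = np.array([[0.75, 0.25],
                  [0.25, 0.75]])
    print(stationary_distribution(P))   # approximately [0.5, 0.5]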

Example: Communication system

What is the limiting probability that a 0 entered into the first stage is output as a 0 from the nth stage as n → ∞ (Figure 6.20)?

Figure 6.20: State diagram.

  1. Balance equation:

    rate entering = rate leaving

    (6.56) $\pi_0\, p_{01} = \pi_1\, p_{10}$, i.e., $0.25\,\pi_0 = 0.25\,\pi_1$

  2. Sum of probabilities:

    $\pi_0 + \pi_1 = 1$

Hence:

$\pi_0 = \pi_1 = 1/2$

which is the stationary distribution.
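
A quick numerical check of this limit (Python/NumPy), showing the first row of P^n approaching [0.5, 0.5] as n grows:

    import numpy as np

    P = np.array([[0.75, 0.25],
                  [0.25, 0.75]])

    # Every row of P^n approaches the stationary distribution, so the
    # limiting probability of outputting a 0 is 0.5 regardless of the input.
    for n in (1, 2, 4, 8, 16, 32):
        print(n, np.linalg.matrix_power(P, n)[0])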


