Markov models are quite robust. However, there are key assumptions and resulting limitations. The primary ones are highlighted below.
Memoryless Assumption: It is assumed that all important system information is captured in the state descriptors of a Markov model. That is, simply knowing which state the system is in uniquely defines all relevant information. This leads to subtleties. For instance, it implies that it does not matter how one arrives at a particular state (i.e., by which path), only that one is currently in that state. The history of previously visited states is irrelevant when determining which state will be visited next. This assumption also implies that the length of time the system has remained in a particular state, by continually taking self loops, is irrelevant. The only thing that matters in determining which state will be visited next is the state the system occupies at the current time. This places a nontrivial burden on choosing an appropriate notation for the state descriptor. For instance, if jobs have different characteristics, this must be evident from the state descriptor. If the order in which jobs arrive to the system makes a difference, this, too, must be captured in the state descriptor. Knowing the current state alone is then sufficient; any other information about the past is unnecessary in predicting the system's future behavior. That is, previous history can be forgotten. This explains the term "memoryless" as it applies to Markov models.
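The property can be sketched in a few lines of Python. In this illustrative sketch (the three states and their transition probabilities are hypothetical, not taken from the text), the next state is sampled using only the row of the transition matrix for the current state; no record of earlier states is ever consulted.

```python
import random

# Hypothetical 3-state discrete-time Markov chain: row i holds the
# transition probabilities out of state i (each row sums to 1).
P = [
    [0.5, 0.3, 0.2],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
]

def next_state(current, rng=random):
    """Sample the next state using only the current state -- the
    path taken to reach `current` is never consulted."""
    r = rng.random()
    cumulative = 0.0
    for state, p in enumerate(P[current]):
        cumulative += p
        if r < cumulative:
            return state
    return len(P) - 1  # guard against floating-point round-off
```

Note that `next_state` takes no history argument at all: two trajectories that reach the same state, by however different paths, face identical futures.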
Resulting Limitation: Because everything must be captured in the state descriptor, Markov models are susceptible to state space explosion. For example, if there are 10 customers at the CPU, each one unlike any other customer (i.e., a multi-class model), and the CPU is scheduled first-come-first-served (i.e., where the order of customer arrivals is important), then the state descriptor of the CPU must record which customer occupies which position in the queue. This leads to 10! = 3,628,800 different possible states for the CPU alone. However, if the customers behave similarly (i.e., a single-class model) and the CPU is scheduled round robin with a short quantum (i.e., where the order of customer arrivals is not important), then the state descriptor of the CPU can be a single number (i.e., the number of customers present). Large state spaces imply additional complexity and a potential loss of accuracy.
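The two counts above can be checked directly. This small sketch contrasts the two scheduling scenarios from the text for the same 10 customers:

```python
from math import factorial

n_customers = 10

# Multi-class, first-come-first-served: queue order matters, so every
# permutation of the 10 distinct customers is a distinct CPU state.
fcfs_states = factorial(n_customers)

# Single-class, round robin with a short quantum: order is irrelevant,
# so the CPU state is just the number of customers present (0..10).
rr_states = n_customers + 1

print(fcfs_states)  # 3628800
print(rr_states)    # 11
```

A descriptor change from "which customer is where" to "how many customers" shrinks the CPU's state space by more than five orders of magnitude.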
Exponential Assumption: Without going into depth, the exponential distribution is the only continuous distribution that is memoryless. For instance, suppose the average amount of CPU time required by a customer is 10 seconds. Knowing that the customer has already received 5 seconds' worth of CPU time but has not yet finished (i.e., previous history, which is irrelevant under the Markov memoryless assumption), the average amount of CPU time still needed is again 10 seconds. This is analogous to flipping a fair coin. The average number of flips to get a head is two. However, knowing that the first flip resulted in a tail, the average number of flips still needed to get a head is again two. Thus, Markov models assume that the times between relevant events, such as job interarrival times and job service times, are exponentially distributed.
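The 10-second CPU example can be verified empirically. This sketch draws a large sample of exponential service times with mean 10 seconds and compares the unconditional mean against the mean *remaining* service for customers who have already been served 5 seconds without finishing; both come out near 10:

```python
import random

random.seed(1)
MEAN = 10.0  # average CPU demand in seconds, as in the text's example

# expovariate takes the rate (1/mean).
samples = [random.expovariate(1.0 / MEAN) for _ in range(200_000)]

# Unconditional mean: about 10 seconds.
overall = sum(samples) / len(samples)

# Conditional mean of the remaining service, given that 5 seconds have
# already elapsed without completion: also about 10 seconds.
survivors = [x - 5.0 for x in samples if x > 5.0]
remaining = sum(survivors) / len(survivors)
```

For any non-exponential continuous distribution, the two estimates would differ: the elapsed 5 seconds would carry information about how much work remains.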
Resulting Limitation: To mitigate the limitation imposed by exponential assumptions, the concept of phases (or stages) can be introduced. For example, again suppose that the average amount of CPU time required by a customer is 10 seconds. By partitioning the total service requirement into two phases of service (i.e., each phase being exponentially distributed with an average of five seconds), the CPU state for each customer can be decoupled into two states. That is, each customer can be in either its first stage of service or in its second stage of service. This technique opens up a whole host of other distributions (i.e., not simply exponential) that can be closely approximated. However, the price is again a potential state space explosion since the state descriptors must now contain this additional phase information.
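The two-phase construction above can be sketched as follows. Summing two exponential phases of mean 5 seconds yields an Erlang-2 service time: the overall mean is still 10 seconds, but the variance drops from 100 (pure exponential) to 50, which is the point of the technique; it approximates less variable (here, more deterministic) service than a single exponential can.

```python
import random

random.seed(2)
MEAN = 10.0   # total average CPU demand, as in the text's example
PHASES = 2    # two exponential phases of mean MEAN / PHASES = 5 s each

def erlang_service(rng=random):
    """Total service time as the sum of PHASES exponential stages
    (an Erlang-2 distribution with this parameterization)."""
    return sum(rng.expovariate(PHASES / MEAN) for _ in range(PHASES))

samples = [erlang_service() for _ in range(200_000)]
mean = sum(samples) / len(samples)
var = sum((x - mean) ** 2 for x in samples) / len(samples)
# mean stays near 10; var is near 50 rather than the exponential's 100.
```

The cost noted in the text shows up here too: tracking which phase each customer occupies is exactly the extra information the state descriptor must now carry.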
Theoretically, Markov models can be constructed to any desired level of accuracy. The price is the complexity of the state descriptors and the resulting size of the state space. Practically, beyond a few thousand states, the computational complexity is limiting and any intuitive benefits are lost. Thus, the system analyst is faced with a tradeoff of determining an acceptable level of aggregation that affords both accuracy and efficiency.