Probabilistic State Machines

So far, all the models have in fact been deterministic. Even the nondeterministic ones turned out to be just more compact representations of a deterministic model. In games, a bit of variation is often needed; deliberately random behaviors can have a positive effect on realism.

By assigning probabilities to the transitions or outputs, we can use finite-state machines to estimate the likelihood of an input sequence occurring or, more commonly, to generate random sequences of outputs.

Definitions

In the case of both probabilistic finite-state automata and machines, there is a strong connection with other fields of mathematics, notably statistics. We'll present this angle too, because the discussion benefits from a comprehensive background.

Markov Chains

A probabilistic finite-state automaton (PFSA) can be seen as a Markov chain. A Markov chain is a sequence of random events, known formally as a stochastic process, with a particular property: the likelihood of the next event depends only on a limited number of preceding events, not on the entire past sequence. A Markov chain of degree one needs only the single most recent event to determine the probability of the next event; this is a state-determined Markov chain. A Markov chain of higher degree is called a history-determined Markov chain, relying on more past events to model the probability of the next event.

We're interested in Markov chains of degree one, because they can be seen as finite-state automata with probabilities associated with each of the transitions (see Figure 40.3). Sticking with definitions similar to the previous finite-state automata, consider the following:

Figure 40.3. A probabilistic finite-state automaton and a simpler finite-state machine.


PFSA = (S, Q, d, q0, F)


Once again, S is the input alphabet, Q is the set of states, and F is the subset of accepting states. d is still the transition function, but associates state/input pairs with the next state and a probability:

d : Q × S → Q × [0, 1]


This is subject to a condition applying to every state qt. The sum of the probabilities p(qt+1) (of the outgoing transitions) for all the possible inputs st should be equal to 1. If this is not the case, the probabilities can be renormalized (that is, divide by the sum):

p'(qt+1) = p(qt+1) / Σ(st ∈ S) p(qt+1)


Most often, d remains a function extending the deterministic FSA model. However, it can become a relation if necessary, allowing multiple transitions from a state for a single input. Handling these poses no additional problem for the simulation or the representation.
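As a concrete illustration, the renormalization step described above can be sketched in a few lines of Python. The dictionary-based representation and the state/symbol names here are illustrative assumptions, not the book's implementation:

```python
# Sketch of renormalization for a probabilistic FSA (illustrative names).
# Transitions are stored as (state, input symbol) -> (next state, probability).

def normalize(transitions):
    # Sum the outgoing probabilities for each state...
    totals = {}
    for (state, _symbol), (_next_state, p) in transitions.items():
        totals[state] = totals.get(state, 0.0) + p
    # ...then divide each probability by its state's total,
    # so the outgoing probabilities of every state sum to 1.
    return {
        key: (next_state, p / totals[key[0]])
        for key, (next_state, p) in transitions.items()
    }

# Example: the two transitions out of "idle" have weights 2 and 1,
# which normalize to 2/3 and 1/3.
transitions = normalize({
    ("idle", "noise"): ("alert", 2.0),
    ("idle", "quiet"): ("idle", 1.0),
})
```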

Hidden Markov Models

A probabilistic finite-state machine can be interpreted as a hidden Markov model. A hidden Markov model (HMM) is like a Markov chain in that it has probabilistic transitions. However, an HMM is a more complex model because the outputs are stochastic, too. Formally speaking, a probabilistic finite-state machine can be defined as follows:

PFSM = (S, Z, Q, d, l, q0, F)


Again, most of the terms are identical to those of Markov chains. As a reminder, Z is the output alphabet, and l is the output relation. (It's a relation because there can be more than one possible output per state/input pair.) Because the definition of d was extended to handle probabilities, we need to do the same for l:

l : Q × S → Z × [0, 1]


Here also, the sum of the probabilities needs to be 1, for each of the possible pairs (qt,st). This ensures consistency in the probabilities (which can also be enforced by normalization):

Σ(zt ∈ Z) p(zt) = 1


Given this definition, we can use a variety of algorithms to extract information from it. It's also possible to create some sample outputs given the model.

Algorithms

There are two principal ways to traverse probabilistic finite-state models: one for evaluating probabilities, the other for generating pseudo-random symbols.

Evaluation

There is an algorithm to compute the probability of a given sequence occurring. The animats can use this technique to evaluate their chances of success, for example. Taking each input symbol in turn, the probabilities of the transitions followed are multiplied together to determine the total probability of the sequence occurring.
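This evaluation can be sketched as follows. The dictionary representation and the state/symbol names are illustrative assumptions:

```python
# Sketch of sequence evaluation on a probabilistic FSA (illustrative names).
# transitions: (state, input symbol) -> (next state, probability), normalized.

def sequence_probability(transitions, start_state, inputs):
    state, prob = start_state, 1.0
    for symbol in inputs:
        entry = transitions.get((state, symbol))
        if entry is None:
            return 0.0  # no such transition: the sequence cannot occur
        state, p = entry
        prob *= p  # multiply the probability of each transition taken
    return prob

transitions = {
    ("idle", "noise"): ("alert", 0.75),
    ("idle", "quiet"): ("idle", 0.25),
    ("alert", "quiet"): ("idle", 1.0),
}
# P("noise" then "quiet", starting from "idle") = 0.75 * 1.0 = 0.75
```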

Generation

It's also possible to perform a random traversal of the graph, taking each transition according to its probability. Randomized NPC behaviors with a common pattern can be generated this way. When the probabilities are respected, the generated sequences form a sample distribution of the Markov chain (that is, they are representative of all the possible sequences). So if we gathered statistics from these sequences to re-estimate the probabilities, we would recover the original model.
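A random traversal of this kind can be sketched as follows, again assuming the illustrative dictionary representation rather than any particular implementation:

```python
import random

# Sketch of random traversal of a probabilistic FSA (illustrative names).
# At each step, an outgoing transition is chosen with likelihood
# proportional to its probability.

def generate(transitions, start_state, length, rng=random):
    state, sequence = start_state, []
    for _ in range(length):
        # Collect the transitions leaving the current state.
        options = [(symbol, next_state, p)
                   for (s, symbol), (next_state, p) in transitions.items()
                   if s == state]
        if not options:
            break  # dead end: no outgoing transitions
        weights = [p for (_, _, p) in options]
        symbol, state, _ = rng.choices(options, weights=weights)[0]
        sequence.append(symbol)
    return sequence
```

For a probabilistic finite-state machine, the same loop would additionally sample an output symbol from the output relation l at each step.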

Discussion

Probabilistic state machines are extremely simple to model and implement. However, they can produce very interesting patterns simply thanks to a stochastic traversal. This makes them ideal for influencing characters to behave in one particular way while allowing them to diverge occasionally. For the designer, this proves to be an extremely powerful control method.

One disadvantage is that random behaviors aren't very predictable! This can be awkward when debugging or developing the finite-state machine. Probabilistic state machines are also subject to the same disadvantages as the standard models, notably limitations in expressive capability and the complexity of large models.



AI Game Development: Synthetic Creatures with Learning and Reactive Behaviors
ISBN: 1592730043
Year: 2003
Pages: 399