Bigram models, presented in Chapter 10 as a form of Hidden Markov Model (HMM), are networks of states and transitions with associated probabilities. The outcome, or observation, emitted from a state is generated according to that state's probability distribution. The observation is made visible while the internal state remains hidden, hence the "hidden" aspect of the Markov model. Hidden Markov models have a variety of applications; in Chapter 10 we look at generating meaningful text from a model trained on a corpus. The HMM and a text-generation example are provided on the CD-ROM at ./software/ch10.
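As a rough illustration of the idea, the following is a minimal sketch (not the CD-ROM implementation) of a word-level bigram text generator: it counts which words follow which in a training corpus, then walks the resulting transition table, choosing each successor at random in proportion to how often it was observed. The function names and corpus here are illustrative assumptions.

```python
import random
from collections import defaultdict

def train_bigrams(corpus):
    """Build a transition table mapping each word to the list of
    words observed to follow it (duplicates preserve frequency)."""
    words = corpus.split()
    table = defaultdict(list)
    for cur, nxt in zip(words, words[1:]):
        table[cur].append(nxt)
    return table

def generate(table, start, length=10, seed=None):
    """Generate up to `length` words by repeatedly sampling a
    successor of the current word from the transition table."""
    rng = random.Random(seed)
    word = start
    out = [word]
    for _ in range(length - 1):
        successors = table.get(word)
        if not successors:
            break  # dead end: the word never appeared mid-corpus
        word = rng.choice(successors)
        out.append(word)
    return " ".join(out)
```

Because successors are stored with repetition, frequent transitions are sampled more often, which is what makes the generated text resemble the training corpus rather than a uniform shuffle of its vocabulary.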