Simulation

As mentioned, the main difference from plain perceptrons is the extra layer of hidden units in an MLP. The intermediate layer means the information required to compute the output is not immediately available: the first layer must be processed before the second, its output then serves as input to the next layer, and so forth until the final result is determined. This is a simple iterative process that sweeps through the entire network.

This process emphasizes the feed-forward structure of perceptrons, and of MLPs in particular; hidden layers do not change this property. Listing 19.1 shows pseudocode that computes the output for an arbitrary number of layers.

Listing 19.1 Feed-Forward Simulation Algorithm Used to Filter the Inputs Through the MLP
# the first layer processes the input array
current = input
for layer from first to last
     # compute the output of each neuron
     for each i in [1..neurons] from layer
          # multiply arrays together and add up the result
          s = NetSum( neuron[i].weights, current )
          # store the post-processed result
          output[i] = Activate( s )
     end for
     # the next layer uses this layer's output as input
     current = output
end for
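The listing can be fleshed out into runnable code. The sketch below is one possible Python rendering, not the book's implementation: the logistic sigmoid stands in for `Activate()` (any squashing function would do), biases are omitted for brevity, and each layer is simply a list of weight vectors, one per neuron.

```python
import math

def activate(s):
    # logistic sigmoid, a common choice for Activate() (an assumption here)
    return 1.0 / (1.0 + math.exp(-s))

def net_sum(weights, inputs):
    # weighted sum of the inputs, matching the pseudocode's NetSum()
    return sum(w * x for w, x in zip(weights, inputs))

def feed_forward(layers, pattern):
    """Propagate an input pattern through the network, layer by layer.

    `layers` is a list of layers; each layer is a list of weight
    vectors, one per neuron (biases left out to keep the sketch short).
    """
    current = pattern
    for layer in layers:
        # this layer's output becomes the next layer's input
        current = [activate(net_sum(weights, current)) for weights in layer]
    return current

# a tiny 2-2-1 network with hand-picked, purely illustrative weights
layers = [
    [[0.5, -0.5], [0.25, 0.75]],   # hidden layer: two neurons
    [[1.0, -1.0]],                 # output layer: one neuron
]
result = feed_forward(layers, [1.0, 0.0])
print(result)
```

Because the loop only relies on the previous layer's output, the same function handles any depth of network, from a single-layer perceptron to an MLP with many hidden layers.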

In practice, this procedure can be used in the same fashion as plain perceptrons; we provide input patterns and collect the corresponding result. The output can be applied to approximate functions, classify patterns, or even control actuators (that is, artificial muscles). The next chapter demonstrates perceptrons in a game situation using function approximation.



AI Game Development: Synthetic Creatures with Learning and Reactive Behaviors
ISBN: 1592730043
Year: 2003
Pages: 399
