Model Overview

Perceptrons essentially consist of a layer of weights, mapping a set of inputs x = [x1, ..., xn] onto a single output y. The arrow above the x denotes that this is a vector consisting of multiple numbers. The mapping from input to output is achieved with a set of linear weights connecting the function inputs directly to the output (see Figure 17.2).

Figure 17.2. A perceptron with four inputs and a single output.


Multiple outputs y = [y1, ..., ym] can be handled by applying the same principle again; another set of weights can connect all the inputs to a different output. Each output, together with its weights, is fully independent of the others. (This independence is, in fact, one limitation of the perceptron.) For this reason, we will focus on networks with a single output y, which simplifies the explanations.
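To make that independence concrete, here is a minimal C++ sketch (the names ProcessAll, W, x, and y are illustrative assumptions, not taken from the book's code): each output is computed from its own row of weights, exactly as if it were a separate single-output perceptron.

    #include <cstddef>
    #include <vector>

    // Sketch only: m independent outputs computed from the same inputs.
    // Each output y[j] has its own row of weights W[j]; no output depends
    // on another, so each row is effectively its own perceptron.
    std::vector<float> ProcessAll(const std::vector<std::vector<float>>& W,
                                  const std::vector<float>& x)
    {
        std::vector<float> y(W.size());
        for (std::size_t j = 0; j < W.size(); ++j)
        {
            float sum = 0.0f;
            for (std::size_t i = 0; i < x.size(); ++i)
                sum += W[j][i] * x[i];    // weight from input i to output j
            y[j] = sum;
        }
        return y;
    }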

The weights are denoted w = [w0, w1, ..., wn]; weights 1 through n are connected to the inputs, and the 0th weight w0 = b is unconnected and represents a bias (that is, a constant offset). The bias can also be interpreted as a threshold: if the bias is added into the weighted sum, the result is compared against a threshold of 0; otherwise, the sum must be compared against the threshold -b.
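As an illustration of this definition, the following C++ sketch computes the net sum of a single-output perceptron, keeping the bias as a separate 0th weight. The name NetSum is an assumption for this example, and the activation that maps the sum to the final output is covered later in the chapter.

    #include <cstddef>
    #include <vector>

    // Net sum of a single-output perceptron: w = [w0, w1, ..., wn] with
    // w[0] = b (the bias), and inputs x = [x1, ..., xn].
    float NetSum(const std::vector<float>& w, const std::vector<float>& x)
    {
        float sum = w[0];                  // start from the bias b = w0
        for (std::size_t i = 0; i < x.size(); ++i)
            sum += w[i + 1] * x[i];        // add each weighted input
        return sum;                        // raw sum, before any activation
    }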

Practical Note

The bias represents a constant offset. As such, we can treat it as a separate value that is not connected to any inputs. In practice, however, it's often easier to include an additional input that remains constant at x0 = 1, connected to the bias w0. This way, the bias can be treated as a normal weight, which simplifies the code slightly!
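Here is what that trick looks like in code, as a small sketch under the same assumptions as before (the names remain illustrative): the constant input x0 = 1 is prepended, so the bias needs no special case in the loop.

    #include <cstddef>
    #include <vector>

    // Same net sum as before, but with the bias folded in as a normal
    // weight paired with a constant input x0 = 1.
    float NetSumWithBiasInput(const std::vector<float>& w,  // [w0, w1, ..., wn]
                              std::vector<float> x)         // [x1, ..., xn]
    {
        x.insert(x.begin(), 1.0f);         // prepend the constant input x0 = 1
        float sum = 0.0f;
        for (std::size_t i = 0; i < w.size(); ++i)
            sum += w[i] * x[i];            // one uniform loop, no special case
        return sum;
    }

Note that x is passed by value here, so prepending the constant input does not modify the caller's vector; in real code the inputs would simply be stored with the constant already in place.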


The choice of data type for the inputs, outputs, and weights has changed over the years, depending on the models and the applications. The options are binary values or continuous numbers. The perceptron initially used binary values (0, 1) for the inputs and outputs, whereas the Adaline allowed the inputs to be negative and used continuous outputs. The weights have mostly been continuous (that is, real numbers), although varying degrees of precision have been used. There is a strong case for using continuous values throughout, as they offer many advantages with few drawbacks.

As for the precision, we'll be using 32-bit floating-point numbers, at the risk of offending some neural network purists. Indeed, 64 bits is the "standard" policy, but in games this is rarely worth double the memory and computational cost; single-precision floats perform just fine. There are more effective ways to improve the quality of our perceptrons than increasing the precision of the weights (for instance, revising the input/output specification or adjusting the training procedure).

The next few pages rely on mathematics to explain the processing inside perceptrons, but the presentation is kept accessible; the text around the equations explains them. The practical approach in the next chapter serves as an ideal complement to this theoretical foundation.


