Single Layer Perceptrons


A single layer perceptron, or SLP, is a connectionist model that consists of a single processing unit. Each connection from an input uᵢ to the cell carries a weight wᵢ that specifies the influence of input uᵢ on the cell. Positive weights indicate reinforcement and negative weights indicate inhibition. These weights, along with the inputs to the cell, determine the behavior of the network. See Figure 5.2 for a simple diagram of an SLP.

Figure 5.2: Single layer perceptron.

From Figure 5.2 we see that the cell includes three inputs (u₁, u₂, and u₃). A bias input (with weight b) is also provided, which will be discussed later. Each input connection includes a weight (w₁, w₂, and w₃). Finally, a single output, O, is provided. The function computed by the cell is defined as f, and is shown in Equation 5.1.

(5.1)  f = b + u₁w₁ + u₂w₂ + u₃w₃

Equation 5.1 is simply a function that sums the products of the weights and inputs, finally adding in the bias. The sum is then provided to an activation function, which can be defined as shown in Equation 5.2.

(5.2)  O = threshold(f) = 1 if f > 0, -1 if f ≤ 0

Or, simply, whenever the sum f is greater than zero, the output is thresholded at 1. If the sum is less than or equal to zero, the output is thresholded at -1.
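Equations 5.1 and 5.2 can be expressed as a minimal Python sketch (the function names threshold and slp_output are illustrative assumptions, not from the text):

```python
def threshold(f):
    """Activation function (Equation 5.2): +1 if f > 0, otherwise -1."""
    return 1 if f > 0 else -1

def slp_output(inputs, weights, bias):
    """Equation 5.1: sum the weighted inputs, add the bias, then threshold."""
    f = bias + sum(u * w for u, w in zip(inputs, weights))
    return threshold(f)
```

Calling slp_output with an input vector, a matching weight vector, and a bias yields the cell's bipolar (+1/-1) output.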

Modeling Boolean Expressions with SLP

While the SLP is a very simple model, it can be very powerful. For example, the basic digital logic gates can easily be constructed as shown in Figure 5.3.

Figure 5.3: Logic gates built from single layer perceptrons.

Recall that an AND gate emits a '1' value only if both inputs are '1'; otherwise a '0' is emitted. Therefore, if both inputs are set (a u vector of [1, 1]), and using the activation function from Equation 5.2 as the threshold, we get:

  • f = bias + u₁w₁ + u₂w₂, or

  • 1 = threshold(-1 + (1 * 1) + (1 * 1))

Now let's try a u vector of [0, 1]:

  • f = bias + u₁w₁ + u₂w₂, or

  • -1 = threshold(-1 + (0 * 1) + (1 * 1))
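These calculations can be checked for all four input combinations with a short sketch (the perceptron function name is illustrative; the weights of 1 and bias of -1 match the examples above):

```python
def threshold(f):
    # Equation 5.2: +1 if f > 0, otherwise -1
    return 1 if f > 0 else -1

def perceptron(u1, u2, w1=1, w2=1, bias=-1):
    # AND gate as a single layer perceptron
    return threshold(bias + u1 * w1 + u2 * w2)

# Print the full truth table; only [1, 1] produces +1
for u1 in (0, 1):
    for u2 in (0, 1):
        print(u1, u2, perceptron(u1, u2))
```

With -1 standing in for logical '0', the output is +1 only for the input vector [1, 1], which is exactly the AND function.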

As both examples show, the simple perceptron model correctly implements the logical AND function (as well as the OR and NOT functions). A digital logic function that the SLP cannot model is the XOR function. The inability of the SLP to solve the XOR function is known as the separability problem . This particular problem was exploited by Marvin Minsky and Seymour Papert to all but destroy connectionist research in the 1960s (and support their own research in traditional symbolic AI approaches) [Minsky and Papert 1969].

The separability problem was easily resolved by adding one or more layers between the inputs and outputs of the neural network (see an example in Figure 5.4). This led to the model known as multiple-layer perceptrons (or MLP).

Figure 5.4: Multiple-layer perceptron (multi-layer network).
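As a sketch of how an added layer resolves the separability problem, the following hand-wired two-layer network computes XOR (the weights are chosen by hand for illustration, not taken from the text): one hidden unit acts as an OR gate, the other as a NAND gate, and the output unit ANDs their results.

```python
def threshold(f):
    # Equation 5.2: +1 if f > 0, otherwise -1
    return 1 if f > 0 else -1

def xor_mlp(u1, u2):
    # Hidden unit 1 acts as OR, hidden unit 2 as NAND (hand-picked weights)
    h1 = threshold(-0.5 + u1 + u2)
    h2 = threshold(1.5 - u1 - u2)
    # Output unit ANDs the two hidden outputs (which are in {-1, +1})
    return threshold(-1.5 + h1 + h2)
```

The output is +1 exactly when one, but not both, of the inputs is 1, which no single layer perceptron can achieve.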


