
positive weight (generally a value of 1) is used, while negative connections (weighted by the competition parameter, ε) exist to every other neurone in F2, as illustrated in Figure 3.15b. The value of ε is subject to the following constraint:

$0 < \epsilon < \frac{1}{m_2}$    (3.44)

where m_2 is the total number of neurones in the F2 layer.

The learning strategy adopted by the ART model is that of competitive learning, as described in Section 3.2. In the initial state, all of the weights g_ij are set to 1.0, as described by Carpenter and Grossberg (1987a), while the weights h_ij have to obey the restriction that h_ij must be greater than 0 and no larger than (α + m_1)^(-1), where α is a small positive constant and m_1 is the total number of neurones in layer F1. In general, for simplicity, the weights h_ij can all be set to:

$h_{ij} = \frac{1}{\alpha + m_1}$    (3.45)

However, an alternative is to let the h_ij decrease following their index order. For example, the weights from the first neurone in F1 to all the neurones in F2, denoted by h_i1, ∀i, will then have the following relationship:

$h_{11} > h_{21} > \cdots > h_{m_2 1}$    (3.46)

The effect of weight setting using either Equation (3.45) or Equation (3.46) is considered in Section 3.5.2.
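As an illustration, here is a minimal Python/NumPy sketch of both initialisation schemes. The function names and the value of α are invented for this example, and the linearly decreasing sequence in the second function is only one assumed way of satisfying Equation (3.46), which fixes the ordering of the weights but not their exact values:

```python
import numpy as np

def init_weights_uniform(m1, m2, alpha=0.1):
    """Initial ART weights: top-down g_ij = 1.0 and bottom-up
    h_ij = 1 / (alpha + m1), as in Equation (3.45)."""
    g = np.ones((m1, m2))                      # top-down weights, F2 -> F1
    h = np.full((m2, m1), 1.0 / (alpha + m1))  # bottom-up weights, F1 -> F2
    return g, h

def init_weights_ordered(m1, m2, alpha=0.1):
    """Alternative initialisation: h_ij decreases with the F2 index i,
    as in Equation (3.46): h_11 > h_21 > ... > h_{m2,1}.  A linearly
    decreasing sequence is one (assumed) way to realise this ordering
    while keeping every weight in the interval (0, (alpha + m1)^-1]."""
    g = np.ones((m1, m2))
    upper = 1.0 / (alpha + m1)                        # upper bound on h_ij
    col = upper * (1.0 - np.arange(m2) / (2.0 * m2))  # strictly decreasing, > 0
    h = np.tile(col[:, None], (1, m1))                # same column for every F1 neurone
    return g, h
```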

During the learning process, a pattern vector X, X = {x_1, x_2, …, x_n}, is presented; it is converted to activity in the layer F1 and combined with the linking weights h_ij to feed into the F2 layer in terms of:

$s_i = \sum_{j=1}^{m_1} h_{ij} x_j$    (3.47)

where s_i is the input for neurone i in layer F2. An iterative process is then carried out to update the activity of each neurone in the F2 layer (so that the neurones compete with each other) until finally only one winning neurone u_win is identified:

$u_{win} = \max_i \{u_i\}$    (3.48)

where u_i is the activity of neurone i in F2, and is computed by
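A common form of this iterative computation (cf. Lippmann, 1987) is u_i(t+1) = f(u_i(t) − ε Σ_{k≠i} u_k(t)), with f(x) = max(x, 0). Assuming that form, the following sketch combines the feed-forward step of Equation (3.47) with the winner-take-all competition; the function name and the choice ε = 1/(2 m_2), which satisfies Equation (3.44), are illustrative assumptions:

```python
import numpy as np

def f2_competition(h, x, eps=None, max_iter=100):
    """Feed a pattern x through the bottom-up weights h (Equation (3.47))
    and iterate the F2 competition until one winner remains, assuming
    the lateral-inhibition update u_i <- f(u_i - eps * sum_{k!=i} u_k)
    with f(x) = max(x, 0)."""
    m2 = h.shape[0]
    if eps is None:
        eps = 1.0 / (2.0 * m2)   # any 0 < eps < 1/m_2 satisfies Equation (3.44)
    u = h @ x                    # s_i = sum_j h_ij x_j   (Equation (3.47))
    for _ in range(max_iter):
        total = u.sum()
        # each neurone is inhibited by the summed activity of the others;
        # negative activities are clipped to zero and drop out of the race
        u = np.maximum(u - eps * (total - u), 0.0)
        if np.count_nonzero(u) <= 1:
            break                # only the winning neurone survives
    return int(np.argmax(u))     # index of u_win in F2
```

Note that under the uniform initialisation of Equation (3.45) every s_i is identical for a given input, so the first competition can only be settled by an arbitrary tie-break; the ordered initialisation of Equation (3.46) makes the initial activities distinct and hence the winner well defined.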
