C++ Neural Networks and Fuzzy Logic by Valluru B. Rao M&T Books, IDG Books Worldwide, Inc. ISBN: 1558515526 Pub Date: 06/01/95


Application to Nonlinear Optimization

Nonlinear optimization is an area of operations research, and efficient algorithms for some of the problems in this area are hard to find. In this chapter, we describe the traveling salesperson problem and discuss how this problem is formulated as a nonlinear optimization problem in order to use neural networks (Hopfield and Kohonen) to find an optimum solution. We start with an explanation of the concepts of *linear*, *integer linear* and *nonlinear* optimization.

An optimization problem has an **objective** function and a set of *constraints* on the variables. The problem is to find the values of the variables that lead to an optimum value for the objective function, while satisfying all the constraints. The **objective** function may be a linear function in the variables, or it may be a nonlinear function. For example, it could be a function expressing the total cost of a particular production plan, or a function giving the net profit from a group of products that share a given set of resources. The objective may be to find the minimum value for the objective function, if, for example, it represents cost, or to find the maximum value of a profit function. The resources shared by the products in their manufacturing are usually in limited supply or have some other restrictions on their availability. This consideration leads to the specification of the constraints for the problem.

Each constraint is usually in the form of an equation or an inequality. The left-hand side of such an equation or inequality is an expression in the variables of the problem, and the right-hand side is a constant. A constraint is said to be linear or nonlinear depending on whether the expression on its left-hand side is a linear or nonlinear function of the variables. A *linear programming problem* is an optimization problem with a *linear* objective function as well as a set of *linear* constraints. An *integer linear programming problem* is a linear programming problem in which the variables are required to have integer values. A *nonlinear optimization problem* has a nonlinear objective function, one or more nonlinear constraints, or both.

Here are some examples of statements that specify objective functions and constraints:

- Linear objective function: Maximize *Z* = 3*X*_{1} + 4*X*_{2} + 5.7*X*_{3}
- Linear equality constraint: 13*X*_{1} - 4.5*X*_{2} + 7*X*_{3} = 22
- Linear inequality constraint: 3.6*X*_{1} + 8.4*X*_{2} - 1.7*X*_{3} ≤ 10.9
- Nonlinear objective function: Minimize *Z* = 5*X*^{2} + 7*XY* + *Y*^{2}
- Nonlinear equality constraint: 4*X* + 3*XY* + 7*Y* + 2*Y*^{2} = 37.6
- Nonlinear inequality constraint: 4.8*X* + 5.3*XY* + 6.2*Y*^{2} ≥ 34.56

An example of a linear programming problem is the **blending problem**. One instance of a blending problem is that of making different flavors of ice cream by blending ingredients, such as sugar and a variety of nuts, to produce different amounts of ice cream in many flavors. The objective is to find the amounts of the individual flavors of ice cream to produce, given the supplies of all the ingredients, so that the total profit is maximized.

An example of a nonlinear optimization problem is the **quadratic programming problem**. Here the constraints are all linear, but the objective function is a quadratic form. A quadratic form is an expression in two variables in which the sum of the exponents of the variables in each term is 2.

An example of a quadratic programming problem is the following simple investment strategy problem. You want to invest a certain amount in a growth stock and in a speculative stock, achieving a return of at least 25%. You want to limit your investment in the speculative stock to no more than 40% of the total investment. You figure that the expected return on the growth stock is 18%, while that on the speculative stock is 38%. Let G and S represent the proportions of your investment in the growth stock and the speculative stock, respectively. So far you have specified the following linear constraints:

G + S = 1 | This says the proportions add up to 1.

S ≤ 0.4 | This says the proportion invested in speculative stock is no more than 40%.

1.18G + 1.38S ≥ 1.25 | This says the expected return from these investments should be at least 25%.

Now the objective function needs to be specified. You have already specified the expected return you want to achieve. Suppose that you are a conservative investor and want to minimize the variance of the return. The variance works out to be a quadratic form. Suppose it is determined to be:

2G^{2} + 3S^{2} - GS

This quadratic form, which is a function of G and S, is your objective function that you want to minimize subject to the (linear) constraints previously stated.

It is possible to construct a neural network to find the values of the variables that correspond to an optimum value of the **objective** function of a problem. For example, the neural networks that use the *Widrow-Hoff learning rule* find the minimum value of the **error** function using the *least mean squared error*. Neural networks such as the feedforward backpropagation network use the *steepest descent* method for this purpose and find a local minimum of the error, if not the global minimum. On the other hand, the Boltzmann machine or the Cauchy machine uses statistical methods and probabilities and achieves success in finding the global minimum of an **error** function. So we have an idea of how to go about using a neural network to find an optimum value of a function. The question remains as to how the constraints of an optimization problem should be treated in a neural network operation. A good example in answer to this question is the *traveling salesperson problem*. Let’s discuss this example next.


Copyright © IDG Books Worldwide, Inc.


Authors: Valluru B. Rao, Hayagriva Rao
