Original Draft

The type of behavior we're interested in is containment [Reynolds99], although the algorithm in this section borrows ideas from specialized obstacle avoidance, which in turn is based on seeking (all from the same white paper). Listing 10.3 shows the pseudo-code for steering a creature within an arbitrarily complex environment. The algorithm uses the normal at the collision point to guess where free space is, and applies a steering force to turn toward it.

Listing 10.3 Generalized Obstacle-Avoidance Algorithm
 function avoid_obstacles
       project the position in the future using the velocity
       if a collision is estimated
             find empty location by projecting away from collision point
             compute the desired turn to seek that free space
       end if
       if the obstacle is within a critical distance
             proportionally determine braking force (slow down or stop)
       end if
       apply steering and braking forces
 end function

To implement this algorithm, we rely on the seek behavior discussed in the previous section. In the preceding algorithm, we use seeking to steer toward free space when predicting a collision (see Figure 10.1).

Figure 10.1. Generalized obstacle avoidance predicts the position in the future, and uses the normal of the obstacle to find an area in free space.


Applying the steering and braking forces is not a problem, because our interface outputs (turn and move) support these actions directly. This is not entirely a happy coincidence; such commands are commonplace, but some insight (and research) during the design phase always helps save time!

Instead, the challenge lies in the inputs, which must be used to determine future collisions. The algorithm relies on knowledge of the environment, performing intersection tests against the known obstacles. The normal at the collision point (that is, the direction the obstacle is facing) is then required to find an empty spot nearby. In some cases, this approach is not suitable for the implementation and can lead to problems in the behavior (for instance, turning in the wrong direction).
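To make the pseudo-code concrete, here is a minimal C++ sketch of Listing 10.3. The Vec2 type, the Raycast query, and the Seek helper are hypothetical stand-ins for the engine's math library, the environment's intersection tests, and the seek behavior from the previous section; the look-ahead and critical distance are parameters to be tuned.

// Minimal 2D vector type standing in for the engine's own math library.
struct Vec2
{
    float x, y;
};

Vec2 operator+(Vec2 a, Vec2 b) { return Vec2{a.x + b.x, a.y + b.y}; }
Vec2 operator*(Vec2 a, float s) { return Vec2{a.x * s, a.y * s}; }

// Result of a hypothetical intersection test against the known obstacles.
struct Hit
{
    bool collided;
    Vec2 point;      // collision point on the obstacle
    Vec2 normal;     // direction the obstacle is facing
    float distance;  // distance from the query origin to the collision
};

// Assumed environment query: intersect a segment with the obstacles.
Hit Raycast(Vec2 from, Vec2 to);

// Assumed seek behavior from the previous section: turning force toward a target.
Vec2 Seek(Vec2 position, Vec2 target);

struct SteeringOutput
{
    Vec2 steer;   // turning force
    float brake;  // 0 = no braking, 1 = full stop
};

SteeringOutput AvoidObstacles(Vec2 position, Vec2 velocity,
                              float lookahead, float criticalDistance)
{
    SteeringOutput out = { Vec2{0.0f, 0.0f}, 0.0f };

    // Project the position into the future using the current velocity.
    Vec2 future = position + velocity * lookahead;

    Hit hit = Raycast(position, future);
    if (hit.collided)
    {
        // Guess where free space is by projecting away from the collision
        // point along the obstacle's normal, and seek that spot.
        Vec2 freeSpace = hit.point + hit.normal * criticalDistance;
        out.steer = Seek(position, freeSpace);

        // Brake proportionally once the obstacle is within a critical distance.
        if (hit.distance < criticalDistance)
            out.brake = 1.0f - hit.distance / criticalDistance;
    }
    return out;
}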

Updated Solution

For an embodied animat, the algorithm can be improved by gathering knowledge of the surroundings through its senses (instead of relying on normals). Fortunately, the AI can determine future collisions implicitly by scanning free space. Our existing specification allows this by tracing lines through empty space, just as the animat would if it were to move in that direction.

The normals returned by the queries are ignored, which can be an advantage; using the obstacles' normals to find free space is a rough estimate, and the likelihood of success diminishes as the complexity of the environment increases. Instead, we can use the senses to find an alternative heading, from which a turning force can be derived.

In Reynolds's demo, a random wandering behavior is engaged when no collisions need to be avoided. It might be interesting to include a more preventive course of action, whereby we step away from obstacles to the side. We can do this by checking the side sensors and strafing laterally to shy away from potential problems (see Figure 10.2). Listing 10.4 shows the new algorithm.

Figure 10.2. Avoiding obstacles using the sensors to detect areas of free space instead of relying on the normals of the obstacles.


Listing 10.4 Updated Obstacle Avoidance Algorithm That Can Deal with the Sensors Provided to the Animat
 function avoid_obstacles2
       check the senses for free space in front, left and right
       if a frontal collision is projected
             find the furthest obstacle on left or right
             determine the best side to turn towards
             compute the desired turn to seek that free space
       end if
       if the front obstacle is within a critical distance
             proportionally determine braking force (slow down or stop)
       end if
       if there is an obstacle on the left side
             proportionally adjust the steering force to step right
       end if
       if there is an obstacle on the right side
             proportionally adjust the steering force to step left
       end if
       apply steering and braking forces
 end function

Implementing this algorithm involves just translating the statements into C++ code, as sketched below. Suitable default values need to be chosen for each of the parameters (for instance, distances and weight coefficients). These will no doubt need to be fine-tuned during a short experimentation phase.
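As a rough illustration of that translation, the sketch below assumes a hypothetical SenseFreeSpace query that traces a line in a relative direction and returns the distance to the nearest obstacle (clamped to the query range). The thresholds and the unit-strength forces are placeholders for the tunable parameters mentioned above.

enum SensorDirection { SENSOR_FRONT, SENSOR_LEFT, SENSOR_RIGHT };

// Assumed sense query: distance to the nearest obstacle along a relative
// direction, clamped to maxRange (returning maxRange means the path is free).
float SenseFreeSpace(SensorDirection dir, float maxRange);

struct SteeringOutput2
{
    float turn;    // negative = turn left, positive = turn right
    float strafe;  // negative = step left, positive = step right
    float brake;   // 0 = no braking, 1 = full stop
};

SteeringOutput2 AvoidObstacles2(float lookahead, float criticalDistance,
                                float sideDistance)
{
    SteeringOutput2 out = { 0.0f, 0.0f, 0.0f };

    // Check the senses for free space in front, left, and right.
    float front = SenseFreeSpace(SENSOR_FRONT, lookahead);
    float left  = SenseFreeSpace(SENSOR_LEFT,  sideDistance);
    float right = SenseFreeSpace(SENSOR_RIGHT, sideDistance);

    if (front < lookahead)   // a frontal collision is projected
    {
        // Turn toward the side whose nearest obstacle is furthest away.
        out.turn = (left > right) ? -1.0f : 1.0f;

        // Brake proportionally once the front obstacle is critically close.
        if (front < criticalDistance)
            out.brake = 1.0f - front / criticalDistance;
    }

    // Shy away from side obstacles, proportionally to their proximity.
    if (left < sideDistance)
        out.strafe += 1.0f - left / sideDistance;   // step right
    if (right < sideDistance)
        out.strafe -= 1.0f - right / sideDistance;  // step left

    return out;
}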

Enhancements

A simple way to improve the behavior is to make it more purposeful. (Lack of purpose can be a problem with reactive AI.) The following behaviors may not be the epitome of purposefulness, but they are certainly a lot better than bouncing off walls!

Wandering

When there are no obstacles, the wandering behavior attempts to move around randomly. Because selecting a new steering force every frame leads to erratic behavior, the AI needs to maintain consistency in the steering forces over time.

One great way to maintain consistency is to use a deterministic but almost unpredictable function. Perlin noise is one such function: a combination of parametric noise functions at different frequencies [Elias98, Perlin99, Perlin01]. It's fully deterministic, so it can be smoothed, yet complex enough for the patterns to be unpredictable. We can use the noise function to determine the steering force as a function of time.
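As a sketch of this idea, the following uses the simple layered value noise described in [Elias98] rather than Perlin's full gradient noise: a deterministic integer hash is smoothed by interpolation, and a few octaves at doubling frequencies are summed into a steering force over time. The constants are conventional choices, not requirements.

#include <cmath>

// Deterministic pseudo-random value in [-1, 1] for an integer coordinate.
float IntegerNoise(int x)
{
    unsigned int n = (unsigned int)x;
    n = (n << 13) ^ n;
    n = n * (n * n * 15731u + 789221u) + 1376312589u;
    return 1.0f - (float)(n & 0x7fffffffu) / 1073741824.0f;
}

// Smooth the integer noise by cosine-interpolating between neighbors.
float SmoothNoise(float t)
{
    int i = (int)std::floor(t);
    float frac = t - (float)i;
    float blend = (1.0f - std::cos(frac * 3.14159265f)) * 0.5f;
    return IntegerNoise(i) * (1.0f - blend) + IntegerNoise(i + 1) * blend;
}

// Sum a few octaves at increasing frequency and decreasing amplitude; the
// result is deterministic and smooth, yet the pattern is hard to predict.
float WanderSteering(float time)
{
    float force = 0.0f;
    float amplitude = 0.5f;
    float frequency = 1.0f;
    for (int octave = 0; octave < 4; ++octave)
    {
        force += SmoothNoise(time * frequency) * amplitude;
        frequency *= 2.0f;
        amplitude *= 0.5f;
    }
    return force;   // use directly as the turning force for this frame
}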

Alternatively, we can add random values to an accumulator variable (akin to Brownian motion). To determine the steering force, we then pass the accumulator through a sine function, so the turns are positive and negative with equal probability. The trends in the movement are no longer erratic, yet they are still random.
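A minimal sketch of the accumulator approach follows; the increment range is an arbitrary choice that controls how quickly the wandering trend drifts.

#include <cmath>
#include <cstdlib>

// Accumulate small random increments (akin to Brownian motion) and map the
// total through a sine, so turns are positive and negative with equal
// probability while still changing smoothly from frame to frame.
float BrownianWander()
{
    static float accumulator = 0.0f;

    // Random increment in [-0.05, 0.05]; the range controls drift speed.
    float increment = ((float)std::rand() / (float)RAND_MAX - 0.5f) * 0.1f;
    accumulator += increment;

    return std::sin(accumulator);   // turning force in [-1, 1]
}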

Forced Exploration

The problem with purely reactive steering behaviors such as obstacle avoidance is that they don't have much direction. It's desirable to prevent the animat from turning back on itself unless absolutely necessary.

A good way to do this is to keep track of a few past positions, so a steering behavior can actively flee from them. To implement this with little memory, we can use a single vector that points toward the area the animat just came from.

This can be implemented easily using a moving average. Assume there's a single vector that summarizes many previous positions (the provenance). We want to update this vector to take the most recent position into account:

 
 provenance = previous * 0.1 + provenance * 0.9 
 

The coefficients (a = 0.1 and b = 0.9) essentially control how long we remember the past positions. They can be changed at will as long as a + b = 1. Small values of a mean the latest position is insignificant compared to the history, whereas large values of a weight the latest position more heavily than the accumulated provenance.
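In code, the update and the resulting steering might look like the sketch below. Vec2 is the same hypothetical vector type as in the earlier sketches, and Flee is an assumed helper that steers directly away from a point (the opposite of seek).

// Assumed helper: turning force directly away from a point (opposite of Seek).
Vec2 Flee(Vec2 position, Vec2 from);

Vec2 provenance = {0.0f, 0.0f};   // decaying average of past positions

Vec2 ForcedExploration(Vec2 position, Vec2 previousPosition)
{
    const float a = 0.1f;        // weight of the newest position
    const float b = 1.0f - a;    // weight of the accumulated history

    // Moving average: blend the last position into the provenance vector.
    provenance = previousPosition * a + provenance * b;

    // Steer away from the area the animat just came from.
    return Flee(position, provenance);
}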

Projecting Targets

A final improvement tip involves randomly projecting targets as far ahead as possible. If we consider the animat as a dog, this would be like throwing a virtual ball. After the ball has been collected, it is thrown again.

This can be achieved by looking ahead in a random direction. When a good place to project the target is found (far enough away), the animat seeks that point. This gives the movement a very strong sense of persistence and coherence, and it works surprisingly well. However, the AI needs to watch out for inaccessible targets and select new ones if obstacles are encountered along the way.
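Here is one possible sketch of the ball-throwing idea, reusing the hypothetical Vec2, Hit, and Raycast interfaces from the earlier listings: random directions are scanned until one offers enough free space, and the target is placed just short of any obstacle.

#include <cmath>
#include <cstdlib>

// Try a handful of random directions; return true and fill in *target if a
// point far enough away is found, false if everything nearby is blocked.
bool ProjectTarget(Vec2 position, float minDistance, float maxRange, Vec2* target)
{
    for (int attempt = 0; attempt < 8; ++attempt)
    {
        float angle = ((float)std::rand() / (float)RAND_MAX) * 6.2831853f;
        Vec2 direction = { std::cos(angle), std::sin(angle) };

        Hit hit = Raycast(position, position + direction * maxRange);
        float freeSpace = hit.collided ? hit.distance : maxRange;

        if (freeSpace >= minDistance)   // far enough to be worth chasing
        {
            // Place the target just short of the obstacle (or at max range).
            *target = position + direction * (freeSpace * 0.9f);
            return true;
        }
    }
    return false;   // no good target this frame; the animat keeps wandering
}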


