The type of behavior we're interested in is obstacle avoidance [Reynolds99], although the algorithm in this section borrows ideas from other specialized steering behaviors described in the same white paper. Listing 10.3 shows the pseudo-code for steering a creature within an arbitrarily complex environment. The algorithm uses the normal at the collision point to guess where free space is, and a steering force is applied to turn toward it.
Listing 10.3 Generalized Obstacle Avoidance Algorithm

function avoid_obstacles
    project the position in the future using the velocity
    if a collision is estimated
        find empty location by moving away from collision point
        compute the desired turn to seek that free space
    end if
    if the obstacle is within a critical distance
        proportionally determine braking force (slow down or stop)
    end if
    apply steering and braking forces
end function
To implement this algorithm, we rely on the seek behavior discussed in the previous section. In the algorithm, we use seeking to steer toward free space when a collision is predicted (see Figure 10.1).
Figure 10.1. Generalized obstacle avoidance predicts the position in the future, and uses the normal of the obstacle to find an area in free space.
The application of steering and braking forces is not a problem, because our interface outputs (turn and move) support these actions directly. This is not entirely a pleasant surprise, because such commands are commonplace, but some insight (and research) during the design always helps save time!
Instead, the challenge lies in the inputs, which must be used to determine future collisions. The algorithm relies on knowledge of the environment, performing intersection tests with the known obstacles. The normal at the collision point (that is, the direction the obstacle is facing) is then required to find an empty spot in space nearby. In some cases, this is not suitable for the implementation and can lead to problems in the behaviors (for instance, turning in the wrong direction).
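To make Listing 10.3 concrete, here is a minimal C++ sketch. The `query` intersection test, the vector type, and all parameter names are illustrative assumptions; the book's actual implementation is not shown here.

```cpp
#include <cmath>

// Minimal 2D vector for the sketch.
struct Vec2 {
    float x = 0.0f, y = 0.0f;
};

Vec2 operator+(Vec2 a, Vec2 b) { return {a.x + b.x, a.y + b.y}; }
Vec2 operator-(Vec2 a, Vec2 b) { return {a.x - b.x, a.y - b.y}; }
Vec2 operator*(Vec2 a, float s) { return {a.x * s, a.y * s}; }
float length(Vec2 a) { return std::sqrt(a.x * a.x + a.y * a.y); }

// Result of a hypothetical intersection query against the environment.
struct Collision {
    bool hit = false;
    Vec2 point;   // estimated collision point
    Vec2 normal;  // obstacle normal at that point
};

// Outputs of one avoidance step: a turning force and a braking factor.
struct Steering {
    Vec2 turn;          // force steering toward free space
    float brake = 0.0f; // 0 = full speed, 1 = stop
};

// One update of generalized obstacle avoidance. 'query' stands in for
// the engine's intersection test (an assumption of this sketch).
template <typename QueryFn>
Steering avoid_obstacles(Vec2 position, Vec2 velocity,
                         float lookahead, float critical_distance,
                         QueryFn query)
{
    Steering out;
    // Project the position into the future using the velocity.
    Vec2 future = position + velocity * lookahead;
    Collision c = query(position, future);
    if (c.hit) {
        // Find an empty location by moving away from the collision
        // point along the obstacle normal, then seek it.
        Vec2 free_spot = c.point + c.normal * critical_distance;
        out.turn = free_spot - position;
        // Brake proportionally as the obstacle gets close.
        float d = length(c.point - position);
        if (d < critical_distance)
            out.brake = 1.0f - d / critical_distance;
    }
    return out;
}
```

Passing the query as a template parameter keeps the sketch independent of any particular collision system.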
For an embodied animat, the algorithm can be improved by gathering knowledge of the surroundings, which comes from its senses (instead of relying on normals). Fortunately, the AI can determine future collisions implicitly by scanning free space. Our existing specification allows this by tracing lines through empty space, as our animat would do if it were to move in that direction.
The normals returned by the queries are ignored, which can be an advantage; using the obstacles' normals to find free space is a rough estimate, and the likelihood of success diminishes as the complexity of the environment increases. Instead, we can use the senses to find an alternative heading, from which a turning force can be derived.
In Reynolds's demo, random wandering behavior is engaged when no collisions need to be avoided. It might be interesting to include a more preventive course of action, whereby we step away from side obstacles. We can do this by checking the side sensors, and strafing laterally to shy away from potential problems (see Figure 10.2). Listing 10.4 shows the new algorithm.
Figure 10.2. Avoiding obstacles using the sensors to detect areas of free space instead of relying on the normals of the obstacles.
Listing 10.4 Updated Obstacle Avoidance Algorithm That Can Deal with the Sensors Provided to the Animat
function avoid_obstacles2
    check the senses for free space in front, left and right
    if a collision is anticipated in front
        find the furthest obstacle on left or right
        determine the best side to turn towards
        compute the desired turn to seek that free space
    end if
    if the front obstacle is within a critical distance
        proportionally determine braking force (slow down or stop)
    end if
    if there is an obstacle on the left side
        proportionally adjust the steering force to step right
    end if
    if there is an obstacle on the right side
        proportionally adjust the steering force to step left
    end if
    apply steering and braking forces
end function
Implementing this algorithm involves just translating the statements into C++ code. Suitable default values for each of the parameters need to be chosen (for instance, distances, weight coefficients). These will no doubt need to be fine-tuned in a short experimentation phase.
A simple way to improve the behavior is to make it more purposeful. (Lack of purpose can be a problem with reactive AI.) The following behaviors may not be the epitome of purposefulness, but they are a lot better than bouncing off walls!
When there are no obstacles, the wandering behavior attempts to move around aimlessly. Because selecting a new steering force every frame leads to erratic behaviors, the AI needs to maintain consistency in the steering forces over time.
One great way to maintain consistency is to use a deterministic, but almost unpredictable, function. One such function is Perlin noise, which is a combination of parametric noise functions at different frequencies [Elias98, Perlin99, Perlin01]. It's fully deterministic, so it can be smoothed, but complex enough for the patterns to be unpredictable. We can use the noise function to determine the steering force as a function of time.
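The idea can be sketched with a simple 1D value noise standing in for true Perlin noise: a few octaves of smoothly interpolated pseudo-random values, summed at increasing frequencies. The hashing scheme is an arbitrary assumption; any repeatable noise function would do.

```cpp
#include <cmath>

// Repeatable pseudo-random value in (-1, 1] for an integer coordinate.
float hash_noise(int i)
{
    unsigned n = (unsigned)i;
    n = (n << 13) ^ n;
    n = n * (n * n * 15731u + 789221u) + 1376312589u;
    return 1.0f - (float)(n & 0x7fffffffu) / 1073741824.0f;
}

// Smoothly interpolate between the values at neighboring integers.
float smooth_noise(float t)
{
    int i = (int)std::floor(t);
    float f = t - (float)i;
    float u = f * f * (3.0f - 2.0f * f); // smoothstep interpolant
    return hash_noise(i) * (1.0f - u) + hash_noise(i + 1) * u;
}

// Sum octaves at different frequencies: fully deterministic, smooth
// over time, yet complex enough to look unpredictable.
float wander_turn(float time, int octaves = 3)
{
    float sum = 0.0f, amplitude = 1.0f, frequency = 1.0f, norm = 0.0f;
    for (int o = 0; o < octaves; ++o) {
        sum += smooth_noise(time * frequency) * amplitude;
        norm += amplitude;
        amplitude *= 0.5f;
        frequency *= 2.0f;
    }
    return sum / norm; // steering turn, normalized into [-1, 1]
}
```

Feeding the current game time into `wander_turn` each frame yields steering forces that drift smoothly rather than jittering.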
Alternatively, we can add random values to an angle variable (akin to Brownian motion). To determine the steering force, we can then use a sine function, so the turns will be positive and negative with equal probability. The trends in the movement are no longer erratic, yet they are still random.
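This alternative fits in a few lines; the step scale is an assumption, and the random step itself is left to the caller so any generator can be used.

```cpp
#include <cmath>

// Brownian-style wander: accumulate small random steps into an angle,
// then take its sine so turns come out positive and negative with
// equal probability, while trends stay smooth over time.
struct BrownianWander {
    float angle = 0.0f;

    // 'rand_step' is expected in [-1, 1]; the 0.1 scale (an assumed
    // tuning value) keeps each increment small.
    float next(float rand_step)
    {
        angle += rand_step * 0.1f;
        return std::sin(angle); // steering turn in [-1, 1]
    }
};
```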
The problem with purely reactive steering behaviors, such as obstacle avoidance, is that they don't have much direction. It's desirable to prevent the animat from turning back on itself unless necessary. A good way to do this is to keep track of a few past positions. A steering behavior can then actively flee away from them. To implement this with little memory, we can use only one vector. The idea is to use one vector that points toward the area the animat just came from.
This can be implemented easily using a weighted running average. Assume that there's a vector pointing to many previous positions (called provenance). We want to update the vector to take into account the last position:

    provenance = previous * 0.1 + provenance * 0.9
The coefficients (a = 0.1 and b = 0.9) affect how long we want to remember the past headings. These can be changed at will as long as a + b = 1. Small values of a mean the previous heading will be insignificant compared to the history, whereas large values of a mean we'll take the previous heading into account more than the provenance.
A final improvement tip involves randomly projecting targets as far ahead as possible. If we consider the animat as a dog, this would be like throwing a virtual ball. After the ball has been collected, it is thrown again.
This can be achieved by looking ahead in a random direction. When a good place to project the target is found (far enough away), the animat seeks that point. This gives the movement a very strong sense of persistence and purpose, and works surprisingly well. However, the AI needs to watch out for inaccessible targets, and select new ones if obstacles are encountered along the way.
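The "virtual ball" throw can be sketched as follows; the `trace` sensor query and the minimum-distance threshold are assumptions of this sketch.

```cpp
// Try to project a target as far ahead as possible along a candidate
// direction. 'trace' stands in for a sensor query returning the free
// distance along that direction (an assumption). Returns false when
// the spot is too close, so the caller can try another direction.
template <typename TraceFn>
bool project_target(float pos_x, float pos_y,
                    float dir_x, float dir_y,
                    float min_distance, TraceFn trace,
                    float& target_x, float& target_y)
{
    float free_distance = trace(dir_x, dir_y);
    if (free_distance < min_distance)
        return false; // not far enough to be worth chasing
    // Place the "ball" at the far end of the free space; the animat
    // then seeks it until reached, and a new one is thrown.
    target_x = pos_x + dir_x * free_distance;
    target_y = pos_y + dir_y * free_distance;
    return true;
}
```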