Sketching Possible Options

Before continuing any further, it's necessary to brainstorm feasible options for all aspects of the problem, while keeping the range of choices wide. Each alternative need not be precisely defined; just a sketch will do.

Context

The main thing to consider is how to model the animat's state in space. Both position and orientation are important concepts when it comes to movement. Keep in mind, however, that neither may be explicitly required by the AI! The animat does not necessarily need to know its position or orientation to perform well, because it can rely on perceptions from the environment. Indeed, the context implicitly affects both the inputs and outputs provided to the animat; they are simply encoded in a different format, as depicted in Figure 8.1.

Figure 8.1. An agent and an object, represented with both absolute coordinates, where the world has the reference origin (top), and relative coordinates, where the agent is the reference origin (bottom).

graphics/08fig01.gif

Both the position and orientation attributes can be encoded in different ways; this is purely an AI design issue, because the engine could store this information differently. A conversion between the two conventions is sketched after the list below.

  • Absolute vectors are expressed in terms of global coordinates, matching the axes of the world.

  • Relative coordinate systems are based on each particular animat's experiences, and do not generally correspond to one another.
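For instance, converting between the two conventions is a matter of translating by the animat's position and rotating by its orientation. The following is a minimal sketch, assuming a hypothetical WorldToLocal function and a simple Vec2 type rather than any particular engine's math library:

    #include <cmath>

    // A minimal 2D vector for illustration only.
    struct Vec2 { float x, y; };

    // Convert an absolute (world) position into coordinates relative to an
    // animat located at 'origin' and facing 'heading' radians from the world x-axis.
    Vec2 WorldToLocal(const Vec2& point, const Vec2& origin, float heading)
    {
        // Translate so the animat becomes the reference origin...
        const float dx = point.x - origin.x;
        const float dy = point.y - origin.y;
        // ...then rotate by the inverse of the animat's orientation.
        const float c = std::cos(-heading);
        const float s = std::sin(-heading);
        return { c * dx - s * dy, s * dx + c * dy };
    }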

Additionally, different formats may be chosen to represent each of these quantities, as mentioned in Chapter 5, "Movement in Game Worlds":

  • Discrete values can be used to represent the abstract quantities in a variably coarse fashion. For example, a north/south/east/west approach uses only four values for movement or turns.

  • Continuous variables are not limited to a finite set of values, and can accurately represent unrestricted domains. (Both options are sketched after this list.)
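As an illustration, a discrete heading might be stored as an enumeration with four values, whereas a continuous one is just an unrestricted floating-point angle. The types below are purely hypothetical:

    #include <cstdint>

    // Discrete representation: only four possible headings.
    enum class Direction : std::uint8_t { North, South, East, West };

    // Continuous representation: an unrestricted heading and step size.
    struct ContinuousMove
    {
        float heading;   // any angle in radians, not just multiples of 90 degrees
        float distance;  // any step magnitude
    };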

Similar properties can be applied to the senses and actions (see Figure 8.2), as well as the underlying concepts.

Figure 8.2. Different ways of modeling the actions.

graphics/08fig02.gif

Actions

The motor actions required for movement can be designed at different levels of abstraction; for example, does the AI request movement step by step, or as full paths? Lower levels of abstraction present simple commands such as "turn left" or "step forward." These may be provided explicitly as actual functions, or more implicitly by taking parameters. A high level of abstraction would offer actions such as "move to" and "turn toward."
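One way to picture the difference is as two alternative interfaces exposed by the engine to the AI. The class and method names below are illustrative assumptions, not an existing API:

    // Low level of abstraction: step-by-step commands, parameterized.
    class LowLevelActions
    {
    public:
        virtual void Step(float distance) = 0;  // "step forward" by a given amount
        virtual void Turn(float angle) = 0;     // "turn left/right" by a given angle
        virtual ~LowLevelActions() = default;
    };

    // High level of abstraction: the engine works out the details.
    class HighLevelActions
    {
    public:
        virtual void MoveTo(float x, float y) = 0;      // "move to"
        virtual void TurnToward(float x, float y) = 0;  // "turn toward"
        virtual ~HighLevelActions() = default;
    };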

It may be possible for the animat to do without the option of turning. This can either be implicit in the move functions (turn toward that direction automatically), or we can assume a constant orientation. The second option wouldn't be as realistic, but it is much simpler.

As you can see in Figure 8.2, discrete moves have limited directions and step sizes (top left), and discrete turns are restricted to 90-degree rotations (top right). Continuous steps instead have any magnitude or direction (bottom right), and continuous turns can rotate by arbitrary angles (bottom left).

Naturally, there is a trade-off between simplicity of implementation (on the engine side) and ease of use (by the AI), so there may be more code on either side of the interface.

Choosing this level of abstraction is similar to the design decision about the commands given to human players. In first-person control, a balance between flexibility of control and intuitiveness is generally found. The movement interface between the engine and the AI can theoretically be handled in the same fashion as the human players, which actually reduces the work needed to integrate the AI into the game.

Senses

As previously emphasized, a sense of free space is needed for the animat to perform navigation. If a simplified version of the environment is not provided before the game starts, the animat can acquire it online with two different queries:

  • Point content functions can return the type of matter at a specific location. It can be empty, solid, or contain some form of liquid, for example.

  • Line traces can be used to find the first intersection of a segment with the world structure. This implicitly enables us to identify areas of free space. (A possible interface for both queries is sketched after this list.)
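A possible shape for these two queries, using hypothetical names (WorldQueries, PointContent, LineTrace) rather than any particular engine's API, might look like this:

    #include <optional>

    // The type of matter occupying a point in space.
    enum class Matter { Empty, Solid, Liquid };

    struct Vec3 { float x, y, z; };

    class WorldQueries
    {
    public:
        // Point content: what kind of matter is at this location?
        virtual Matter PointContent(const Vec3& position) const = 0;

        // Line trace: first intersection of the segment [start, end] with the
        // world structure, or nothing if the segment lies entirely in free space.
        virtual std::optional<Vec3> LineTrace(const Vec3& start,
                                              const Vec3& end) const = 0;

        virtual ~WorldQueries() = default;
    };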

The animat can decide where to go based on this information, but more elaborate queries of the environment can be devised to simplify its task (that is, ones based on the expected movement of the animat). On the other hand, much simpler sensors can be used instead, or even in conjunction with the ones just listed.

  • Collision detectors can indicate when the animat has hit an obstacle of any description (walls, ledges, or other creatures).

  • Odometers can be used to measure the distance travelled relative to the current state.

  • Other such proprio-sensors (from the Latin proprius, "one's own"; sensors that detect one's own state) allow relative angular movement to be monitored. (A minimal structure grouping these sensors is sketched after this list.)
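Grouped together, such proprio-sensors amount to very little data per update. The structure below is only an illustrative sketch:

    // Readings gathered by the animat about its own state, updated each frame.
    struct ProprioSensors
    {
        bool  collided;       // collision detector: did we hit anything?
        float distanceMoved;  // odometer: distance travelled since the last reading
        float angleTurned;    // angular sensor: relative rotation since the last reading
    };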

It is possible to understand the layout of the environment using only contact sensors and by tracking the position (see Figure 8.3), although terrain learning is then required to prevent collisions. However, this approach is less realistic because it relies on trial and error rather than prediction; the animat finds out about obstacles only when it is too late.

Figure 8.3. Three different kinds of sensors to allow the animats to sense their environment, using point and line queries or even collision indicators.

graphics/08fig03.gif

Historically, game AI developers just used relative movement to determine collisions; if the creature hasn't moved forward, an obstacle must have been hit. This is a consequence of poor interface design rather than a matter of efficiency; the physics engine already knows about collisions, but the AI has to resort to hacks to determine them again. A better interface would pass this existing information from the physics system to the AI.
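For example, the physics system could simply notify the AI of contacts through a callback, so nothing is recomputed. The event and handler types below are assumptions for the sake of illustration, not part of any real engine:

    #include <functional>
    #include <vector>

    // A collision already detected by the physics system.
    struct CollisionEvent
    {
        int   entityId;                  // which animat collided
        float normalX, normalY, normalZ; // surface normal at the contact point
    };

    class PhysicsEvents
    {
    public:
        using Handler = std::function<void(const CollisionEvent&)>;

        // The AI registers a handler once...
        void OnCollision(Handler handler) { handlers.push_back(std::move(handler)); }

        // ...and the physics system calls Notify whenever a contact occurs,
        // so the AI never has to rediscover the collision on its own.
        void Notify(const CollisionEvent& event) const
        {
            for (const auto& handler : handlers)
                handler(event);
        }

    private:
        std::vector<Handler> handlers;
    };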


