Sketching Possible Options

This chapter complements the specification of weapon handling from the previous part. Although both relate to combat, Chapter 15, "Shooting, Formally," focused on aiming and firing, whereas this part focuses on weapon selection. The two specifications certainly overlap, so incompatibilities should be minimized by reusing the existing model wherever possible.

The next few pages discuss the three aspects of the task that need to be specified: the inputs (that is, information required), the outputs (that is, possible actions), and the context (implicit variables that affect the problem).

Context

The weapon model is the most important aspect of the context. How detailed must the design of each weapon be for the artificial intelligence (AI) to be able to choose one? Once again, instead of exposing the complexity of the 3D model to the AI, a more abstract alternative is suitable: the weapon is treated as a symbol.

The most straightforward approach is to represent the weapon type as a symbol. The AI would manipulate references such as "machine gun" or "rocket launcher." Using weapon types is generally not a problem in first-person shooters; either the inventory contains only one weapon of each type, or it doesn't matter which instance is used.

Such ambiguity is unlikely to be a problem, and when it is, it's easy to disambiguate the instances: each weapon instance can be given a distinct symbol, such as "blaster1" and "blaster2." This may be appropriate in role-playing games, where the inventory plays a more important role.
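
As a minimal sketch (the names and fields here are purely illustrative, not taken from any particular engine), the symbolic representation could look like this in C++:

  #include <string>

  // Hypothetical symbolic view of a weapon: the AI manipulates identifiers
  // only, never the underlying 3D model or game object.
  struct WeaponSymbol
  {
      std::string type;      // for example, "machine gun" or "rocket launcher"
      std::string instance;  // for example, "blaster1"; only needed when instances matter
  };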

When using this symbolic approach to deal with weapons, the 3D models are still animated; a lower layer is relied upon to translate from the abstract representation into the actual gestures. The same translation happens when human players request weapon changes, too.

Senses

Three different aspects of the environment are required to select weapons, each with multiple possible models.

Environment

When selecting a weapon instead of just shooting, more information about the situation is required. Only localized terrain information is needed when predicting movement, because the direction of travel is usually obvious, so it's possible to focus on a small part of the environment. Weapon selection is a higher-level process; the decision lasts longer and has repercussions on the shooting behavior. For this reason, a broader view of the environment is required.

As well as the line-of-sight queries used to create the shooting capability, properties such as spatial restriction (that is, how much room there is to maneuver) around the player and the enemy are important factors. Exposing them to the AI allows it to make more informed decisions.

To model the interface, we can use a direct approach, offering a function such as GetSpatialRestriction() to the animats. This function would return a high value for highly restricted areas, and a lower value in open environments. Alternatively, the function could return the average distance to surrounding obstacles, which is simpler to interpret.
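
A sketch of such a direct interface might look as follows; the function names and the [0, 1] scaling are assumptions made for illustration:

  // Hypothetical high-level environment queries offered to the animat.
  class EnvironmentInterface
  {
  public:
      virtual ~EnvironmentInterface() {}

      // Returns a value in [0, 1]: close to 1 in tightly restricted areas,
      // close to 0 in wide-open spaces.
      virtual float GetSpatialRestriction(const float position[3]) const = 0;

      // Alternative formulation: the average distance to surrounding
      // obstacles, which is easier to interpret directly.
      virtual float GetAverageObstacleDistance(const float position[3]) const = 0;
  };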

In contrast, we could let the animat sense spatial restriction through visual queries: line sensors can already return distances (like precise laser range finders). Estimating the amount of restriction from line traces may be less efficient, however, so a hybrid of this approach and the previous high-level one may be more appropriate.
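
The line-trace formulation could be sketched as follows, assuming the engine exposes some ray-cast query (called TraceDistance() here purely for illustration):

  #include <algorithm>

  // Engine-provided ray cast returning the distance to the first obstacle
  // along the given horizontal direction (assumed, not an actual API).
  float TraceDistance(const float origin[3], float yaw);

  // Estimate spatial restriction from a handful of evenly spread line traces:
  // 1.0 means "cramped," 0.0 means "wide open."
  float EstimateRestriction(const float origin[3], int numRays, float maxRange)
  {
      const float twoPi = 6.2831853f;
      float total = 0.0f;
      for (int i = 0; i < numRays; ++i)
      {
          float yaw = twoPi * i / numRays;
          total += std::min(TraceDistance(origin, yaw), maxRange);
      }
      float average = total / numRays;
      return 1.0f - average / maxRange;
  }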

Player State

The personal state is more straightforward to handle. All we need is a query for health, armor, and the current weapon. This information is typically displayed to the human player on a head-up display (HUD), so it's readily available in the engine for the AI to query.
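
A sketch of that query, with illustrative names only, might be:

  #include <string>

  // Hypothetical personal-state query: the same information already shown on
  // the player's HUD, exposed only for the animat's own body.
  struct PersonalState
  {
      int health;                 // typically 0 to 100
      int armor;                  // typically 0 to 100
      std::string currentWeapon;  // symbol of the weapon currently in hand
  };

  PersonalState QueryPersonalState();  // assumed engine call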

However, it isn't as easy to determine the health of enemies (or at least shouldn't be). Conceptually, directly querying information about other players violates the principles of embodiment; most players would call it "cheating." As game AI developers, we're interested in embodiment because of the emergent behaviors and weapon decisions that arise from not knowing the enemy's state.

It is possible to monitor the state of other players in various indirect ways, some of which benefit from being biologically plausible. One of the simplest, and arguably the most efficient, is an event-driven approach. When a projectile is fired, the players nearby perceive a sound, so the animat knows which weapon the enemy used. Likewise, when a sound of pain is emitted or blood splashes, the event can be used to decrease the health estimate.
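
One way to sketch this event-driven estimate is shown below; the event names, the amounts subtracted, and the starting health are all assumptions made for illustration:

  #include <string>

  // Rough, event-driven estimate of an enemy's state. The engine would
  // deliver equivalent notifications (weapon sounds, pain sounds, blood)
  // in its own way; the handlers here are hypothetical.
  class EnemyEstimate
  {
  public:
      void OnWeaponSound(const std::string& weaponType)
      {
          lastKnownWeapon = weaponType;  // the fire sound reveals the weapon
      }
      void OnPainSound()   { health -= 10; }  // rough guess per pain event
      void OnBloodSplash() { health -= 5;  }  // rough guess per observed hit
      void OnRespawn()     { health = 100; }  // reset when the enemy respawns

      int HealthEstimate() const { return health > 0 ? health : 0; }
      const std::string& LastKnownWeapon() const { return lastKnownWeapon; }

  private:
      int health = 100;             // assume default spawn health
      std::string lastKnownWeapon;  // empty until a weapon sound is heard
  };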

It's also interesting to note that an animat keeping track of the enemy's state needs internal variables, so it is no longer truly reactive (sensing and acting upon personal state, by contrast, remains reactive). This isn't a problem, because simple enemy models are easily dealt with, but it's still important to be aware of.

Weapon Properties

The properties of a weapon (for instance, reload times and firing rate) can be used in the decision, too. There are three different ways of handling them:

  • Declarative Each of the characteristics of the weapons can be stored as a set of rules and accessed by the brain of the animat. The benefit is the separation of data and code. The rules used by the AI may be independent of the game logic, although the AI often benefits from having accurate facts.

  • Implicit The easiest way to handle the properties of a weapon (from the specification's point of view) is not to mention them at all. Each weapon property has an implicit effect on the environment, so each can be induced relatively easily (for instance, monitoring how long a weapon takes to become ready again reveals the reload time, and counting projectiles per second gives the rate of fire).

  • Query Finally, a dedicated interface to the game logic could be devised, which returns the precise weapon properties at runtime.

If anything, human players use the implicit approach, because they are rarely told the weapon statistics before the game (or don't pay attention to them); they learn the behavior of weapons with practice.
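
To make the declarative and query options concrete, a possible sketch follows; the property fields and the function name are illustrative assumptions:

  #include <string>

  // Properties the decision could rely on; the exact set depends on the game.
  struct WeaponProperties
  {
      float reloadTime;       // seconds to become ready again
      float fireRate;         // projectiles per second
      float projectileSpeed;  // units per second; very large for instant hits
      float splashRadius;     // zero for weapons without area damage
  };

  // Query approach: ask the game logic for the exact figures at runtime.
  WeaponProperties QueryWeaponProperties(const std::string& weaponType);  // assumed

  // Declarative approach: the same data could instead be loaded from rule or
  // data files at startup and kept separate from the code that uses it.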

Actions

An interface is necessary to apply the weapon choice. Specifically, the weapon must be ready to use soon after the decision is executed.

The action for selecting a weapon can be a direct selection, using only one procedure call. The model is similar to human players pressing the keys bound to specific weapons (usually the number keys on the top row).

Alternatively, weapons can be selected by cycling through all the items in the inventory. This is a sequence of commands, analogous to using the mouse wheel to select weapons. The advantage is that no direct knowledge of the current weapons is needed; the AI can just cycle through the current inventory, regardless of its content.
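
Both interfaces could be sketched as follows; the names are illustrative, not taken from a specific engine:

  #include <string>

  // Possible action interfaces for applying the weapon choice.
  class WeaponActions
  {
  public:
      virtual ~WeaponActions() {}

      // Direct selection: one call, like a human pressing the number key
      // bound to a specific weapon.
      virtual void SelectWeapon(const std::string& weaponSymbol) = 0;

      // Cycling: one step through the inventory per call, like the mouse
      // wheel; no knowledge of the inventory's content is required.
      virtual void NextWeapon() = 0;
  };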


