Part VII: Action Selection

This is the final practical part, bringing together the best of all the previous chapters under the control of an adaptive AI. Many of the existing capabilities and behaviors were designed as standalone components, but can be reused in more complex architectures. The result is an animat capable of learning to play a game, one which most game producers would be proud to include in their engine.

Motivation

The overall aim of this part is to produce competent deathmatch behaviors by building on the existing components. Fixed reactive behaviors are already very satisfactory, but adaptive behaviors will keep the players on the edge of their seats.

Each of the capabilities developed so far has been relatively narrow, capable of performing only specific tasks. This part brings them together under a top-level component in the architecture. We'll show how (mostly) reactive components become very capable when combined.

As usual, the first prototype does not learn the behaviors; instead, it allows the designer to specify the animat's strategies. The second prototype uses an adaptive learning technique. Both approaches have advantages and pitfalls, and they are compared with each other.

Outline

Chapter 44, "Strategic Decision Making." The first objective is to explain deathmatch strategies and analyze the roles of the environment and game engine. Then, we reintroduce decision making in the context of high-level tactics.

Chapter 45, "Implementing Tactical Intelligence." Capabilities from previous chapters are reused to develop a set of default tactical behaviors. A subsumption architecture is designed to integrate the behaviors in a coherent fashion, as sketched below. This provides a system that is easily controlled by the designers and is therefore predictable.
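To make the idea of a subsumption architecture concrete, here is a minimal sketch of priority-based arbitration between tactical behaviors. The behavior names, conditions, and interface are illustrative assumptions, not the book's implementation; the key point is that higher-priority layers suppress the output of the layers beneath them.

```cpp
// Minimal sketch of subsumption-style arbitration (illustrative only;
// the behavior names and interface are assumptions, not the book's code).
#include <iostream>
#include <memory>
#include <optional>
#include <string>
#include <vector>

struct Action { std::string description; };

// Each layer proposes an action only when its conditions apply.
struct Behavior {
    virtual ~Behavior() = default;
    virtual std::optional<Action> propose() const = 0;
};

struct Flee : Behavior {
    bool lowHealth = false;
    std::optional<Action> propose() const override {
        if (lowHealth) return Action{"retreat to nearest health pack"};
        return std::nullopt;
    }
};

struct Attack : Behavior {
    bool enemyVisible = true;
    std::optional<Action> propose() const override {
        if (enemyVisible) return Action{"engage visible enemy"};
        return std::nullopt;
    }
};

struct Explore : Behavior {
    std::optional<Action> propose() const override {
        return Action{"wander toward unexplored area"};  // always applicable fallback
    }
};

// Higher-priority layers subsume (suppress) the ones below them.
Action arbitrate(const std::vector<std::unique_ptr<Behavior>>& layers) {
    for (const auto& layer : layers)
        if (auto action = layer->propose()) return *action;
    return Action{"idle"};
}

int main() {
    std::vector<std::unique_ptr<Behavior>> layers;
    layers.push_back(std::make_unique<Flee>());     // highest priority
    layers.push_back(std::make_unique<Attack>());
    layers.push_back(std::make_unique<Explore>());  // lowest priority
    std::cout << arbitrate(layers).description << "\n";  // "engage visible enemy"
    return 0;
}
```

The fixed priority ordering is what makes the resulting system easy for designers to control and predict.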

Chapter 46, "Reinforcement Learning." The theory behind reinforcement learning is introduced in this chapter to remedy the limitations of the previous prototype and to introduce adaptability at the same time. The chapter explains in depth the different algorithms applicable to game development.
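As a taste of what such algorithms involve (this example is not drawn from the chapter itself), a one-step Q-learning update nudges the estimated value of taking an action in a state toward the observed reward plus the discounted value of the best action in the next state. The table sizes, learning rate, and discount factor below are placeholder assumptions:

```cpp
// Minimal sketch of a one-step Q-learning update (illustrative, not the book's code).
// States and actions are plain indices; the reward and transition come from the game.
#include <algorithm>
#include <array>
#include <cstddef>

constexpr std::size_t kStates = 16;
constexpr std::size_t kActions = 4;

using QTable = std::array<std::array<double, kActions>, kStates>;

// Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
void qUpdate(QTable& q, std::size_t s, std::size_t a, double reward,
             std::size_t nextState, double alpha = 0.1, double gamma = 0.9) {
    const auto& nextRow = q[nextState];
    double bestNext = *std::max_element(nextRow.begin(), nextRow.end());
    q[s][a] += alpha * (reward + gamma * bestNext - q[s][a]);
}
```

In a deathmatch setting, the reward signal might be derived from events such as frags scored or damage taken, so the estimates gradually reflect which actions pay off during play.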

Chapter 47, "Learning Reactive Strategies." In this chapter, the reinforcement learning algorithms are applied to creating adaptive deathmatch behaviors. This is done in a modular fashion by decomposing the strategies into capabilities instead of reusing the default tactics. This approach gives the learning more flexibility and power.

Chapter 48, "Dealing with Adaptive Behaviors." Building on the previous chapters that explain how to design learning AI, the last chapter of this part tackles the most challenging problem: how to deal with adaptive AI that learns within the game. Tips and tricks are presented, as well as traps to avoid.

Assumptions

Although the technical requirements from the other parts still apply, the assumptions here are at a much higher level, closer to the nature of the game itself:

  • The purpose of the game has been described using game logic and can be interpreted by the animats.

  • Each animat has a set of criteria, expressed as moods, that determines what it wants to achieve in the game (a minimal sketch follows this list).

  • The animats can compete against each other in the game, for training and evaluation purposes.

  • Basic capabilities are available and can be customized to produce different variations at will.
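For illustration only, here is one hypothetical way such mood-based criteria could be represented and used to score candidate objectives; the mood names, weights, and feature keys are assumptions, not the book's data model.

```cpp
// Hypothetical sketch of per-animat criteria expressed as moods
// (names and weights are assumptions for illustration, not the book's code).
#include <map>
#include <string>

// Each mood weights one aspect of the game the animat cares about.
struct Moods {
    float aggression = 0.5f;   // preference for seeking out fights
    float caution    = 0.5f;   // preference for preserving health
    float curiosity  = 0.5f;   // preference for exploring and gathering items
};

// Score a candidate objective against the animat's moods; the objective's
// feature values would come from interpreting the game logic.
float scoreObjective(const Moods& moods, const std::map<std::string, float>& features) {
    auto get = [&](const char* key) {
        auto it = features.find(key);
        return it != features.end() ? it->second : 0.0f;
    };
    float score = 0.0f;
    score += moods.aggression * get("combat");
    score += moods.caution    * get("safety");
    score += moods.curiosity  * get("exploration");
    return score;
}
```

Varying such weights per animat is one way to customize the basic capabilities and produce the different behavioral variations mentioned above.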

The first chapter in this part discusses these issues in greater depth during the analysis phase.


