Chapter 2: What's in a Game?


The first thing most programmers think of when they start coding a game is the graphics system. They'll start with a DirectX sample, import some of their own miserable programmer art, put an environment map or a bump map on everything in sight, and shout "Eureka! The graphics system is finished! We'll be shipping our game by next weekend!"

By the time the next weekend rolls around, the same newbie game programmers will have a long laundry list of things that need to be done, and there are a number of subtle things that they will have completely missed. These hidden systems are usually the heart of every game, and you're never aware of them when you play games because you're not supposed to be aware of them. By the end of this chapter, you'll have a brief overview of the main components of game code and how they fit together. The rest of this book digs into the details of these systems and how they are built.

Display Technologies: A Quick Overview of the Issues

Display decisions include bit depth, resolution, and the core graphics technology that pushes pixels: 2D, 3D, or a hybrid of the two. The PlayStation and GameCube coders out there can skip the resolution and bit depth section because those are chosen for you. Xbox and desktop coders are still in the discussion because you have some choices about what resolutions your game will support. Only the desktop coders need worry about bit depth. Everyone should worry about the core graphics technology.

Resolution and Bit Depth

It might seem crazy to support a 640x480 8-bit palettized mode in the 21st century. I can promise you it's equally crazy to assume that everyone buying computer games has a flame-thrower CPU/video card rig that can push 1280x1024x32 at 75fps. There's really only one major question you need to answer before settling on a minimum acceptable resolution and bit depth: what will sell the most games?

The Microsoft Casino/Card projects I worked on at Compulsive Development were clearly marketed to the casual gamer, people like my parents. They have computers that are three to four years old and therefore look ancient to most programmers: 1/8 the VRAM most developers use, 1/4 the CPU speed, and 1/8 the system RAM. Face it, these computers are better used as boat anchors than game platforms! Still, there are plenty of those boxes out there if you are going for the Wal-Mart crowd, a lucrative market indeed.

On the complete opposite side of the spectrum, if you are working on the latest and greatest graphics game specifically for the hardcore shooter market, you'd be nuts to code your game to the same pathetic standard. Instead, go for a bit depth and resolution that lets the newest and best game rig eke out a solid 60fps. You can be pretty sure that the early adopters will have the best rigs, making your game look completely fantastic at its highest resolution.

Best Practice

For the longest time, Ultima fans had to upgrade their computers every time a new version was released. Look at these minimum specs: Ultima I-IV: Apple ][, Ultima V: IBM PC/CGA, Ultima VI: IBM PC/EGA or Hercules, Ultima VII: Intel 386/VGA, Ultima VIII: Intel 486/VGA, Ultima IX: Intel Pentium 233MHz/3dfx Voodoo. Having worked on many of these games, I realized that the programmers would always get the latest and greatest hardware at the beginning of the project and write the game to work well with that technology. The problem was that after 18 months of development the code was already slow enough that the programmers were screaming for hardware upgrades; after all, those compile times were going to cut the company's productivity in half! It seemed that every Ultima shipped requiring hardware that had been considered bleeding edge only six to twelve months before. If your game has a wide hardware requirement, make sure that at least one programmer has a second machine on his desk that matches the "dirtbag" minimum-spec machine.

By the way, I think it is crazy to support 8-bit palettized displays in any game. This includes kids' games and casual games. You can make a safe assumption that if a piece of hardware was bleeding edge five years ago, you'll find it in wide use in the casual gamer's hardware profile today. Since the release of Windows 95, 16-bit displays have been common, so I think it's a safe bet that you don't have to support 8-bit displays. It's a good thing, too, because it's a real hassle to support 8-bit modes in a Windows-compatible game.

Leaving 640x480 in the dust is a slightly harder decision if you are going for the low-end market. The two things that will limit you here are the ability to push pixels from system RAM to video memory (VRAM) and how much VRAM you have to play with. You'll find that low-end machines with a 2MB video card simply can't draw 800x600x16 screens with anything more than 35% overdraw before the frame rate drops into the dismal teens.

Best Practice

Set every display-related technology decision to meet your minimum frame rate goals. Consoles will require a minimum frame rate of 60fps, which is extremely difficult to achieve. Desktops can get away with much lower, but try to hit a minimum frame rate of 30fps.

To find your maximum theoretical frame rate, you first have to know how your game draws pixels: 2D, 3D, or a combination of the two. A sprite game generally copies memory from system RAM to VRAM, or from VRAM to VRAM, sometimes accomplishing some interesting blending tricks on the way. 3D games use software or hardware to fill a series of triangular areas of the screen with shaded color or texture data. Just in case it isn't completely obvious, these two methods are completely different and you have to use different methods of calculating your maximum theoretical frame rate.

A 2D sprite engine is easier to estimate. You can calculate your maximum theoretical frame rate by knowing something about your hardware and how you'll fill up the VRAM and system RAM. First, look at the video hardware and find out the maximum transfer rate from system RAM to VRAM. There can be some tricky caveats here, too, since video cards transfer memory over different bus architectures (PCI, AGP, and so on), so be sure you have the proper transfer rate. Take a look at the transfer rate in both directions, since you'll use a VRAM-to-system-RAM copy to perform some of that interesting blending.

Once you have the transfer rates, take a look at your screen designs and try to estimate how much of the screen is going to be drawn every frame. You might discover that this figure exceeds 100% of the pixels. For example, if you have a multi-layered side scrolling fighting game, you'll be redrawing the background every frame as well as anything on top, such as the characters or special effects.
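
To make that concrete, here's a back-of-the-envelope calculation in C++. Every number in it (the transfer rate, the resolution, the overdraw estimate) is a placeholder assumption; swap in figures measured on your real target hardware before you believe the answer:

// Back-of-the-envelope estimate of a 2D sprite game's maximum theoretical
// frame rate. The transfer rate and overdraw figures below are placeholder
// assumptions - measure your real target hardware before trusting the result.
#include <cstdio>

int main()
{
    const double sysToVramBytesPerSec = 80.0 * 1024.0 * 1024.0; // assumed bus transfer rate
    const int    screenWidth   = 800;
    const int    screenHeight  = 600;
    const int    bytesPerPixel = 2;      // 16-bit color
    const double overdraw      = 1.3;    // 130% of the screen touched each frame

    double bytesPerFrame      = screenWidth * screenHeight * bytesPerPixel * overdraw;
    double maxTheoreticalFps  = sysToVramBytesPerSec / bytesPerFrame;

    std::printf("Max theoretical frame rate: %.1f fps\n", maxTheoreticalFps);
    return 0;
}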

Take special note of anything that will use alpha blending or chroma-keying. A completely opaque sprite, a rectangular shape with no transparent or semi-transparent pixels, is essentially copied straight from memory to VRAM, an extremely fast operation. A sprite that has some completely transparent pixels must be checked for a particular color value, the chroma key, before each pixel is copied. This is much slower, since per-pixel comparisons use more CPU horsepower. The worst case, of course, is any sprite that uses semi-transparent pixels. This case requires a blending calculation or lookup on both the source and destination pixels, which is much slower than a simple comparison and branch.
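
Here's a rough sketch of why those three sprite types cost what they do, written as three scanline copiers. The 16-bit 565 pixel format and the 50/50 blend are illustrative assumptions, not a recipe for your particular hardware:

#include <cstdint>
#include <cstring>

// Three ways to move one scanline of a 16-bit sprite into the back buffer.
// The pixel format (5-6-5) and the 50% blend are illustrative assumptions.

// 1. Opaque sprite: a straight memory copy, the fastest case.
void CopyOpaqueRow(uint16_t* dst, const uint16_t* src, int count)
{
    std::memcpy(dst, src, count * sizeof(uint16_t));
}

// 2. Chroma-keyed sprite: every source pixel is compared against the key
//    color before it is written, so the cost is per pixel.
void CopyChromaKeyRow(uint16_t* dst, const uint16_t* src, int count, uint16_t key)
{
    for (int i = 0; i < count; ++i)
        if (src[i] != key)
            dst[i] = src[i];
}

// 3. Semi-transparent sprite: both source and destination are read and
//    blended (a 50/50 average here), the slowest case of the three.
void Blend50Row(uint16_t* dst, const uint16_t* src, int count)
{
    for (int i = 0; i < count; ++i)
    {
        // Mask off the low bit of each 5-6-5 channel, then average.
        uint16_t s = src[i] & 0xF7DE;
        uint16_t d = dst[i] & 0xF7DE;
        dst[i] = (s >> 1) + (d >> 1);
    }
}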

Clearly, if your game design requires multiple layers of semi-transparent sprites with an average of 180% of the pixels drawn per 800x600 frame, you won't accomplish this on an old Pentium 133MHz with a 2MB video card. In fact, you'll find that 130% screen overdraw with simple chroma-keyed sprites will still kill the frame rate of this old beast. If you are unlucky enough to be in the middle of this problem, you have only one reasonable solution: find a way to draw fewer pixels per frame. A good suggestion here is to attempt a dirty rectangle solution, where you draw only those parts of the screen that actually change.
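
A minimal sketch of the dirty rectangle idea follows. The Rect structure and the redraw stub are hypothetical stand-ins for your own engine's types; the point is simply that only the marked regions get recomposited each frame:

#include <vector>

// A minimal dirty-rectangle scheme: sprites that move mark the region they
// vacated and the region they now occupy, and only those regions get
// recomposited. Rect and RedrawRegion() are hypothetical stand-ins for your
// own engine's types and blitting code.
struct Rect { int x, y, w, h; };

void RedrawRegion(const Rect& r)
{
    // ... blit the background and any sprites overlapping r (engine-specific)
}

class DirtyRectList
{
public:
    void MarkDirty(const Rect& r) { m_dirty.push_back(r); }

    // Call once per frame after all sprites have moved.
    void Flush()
    {
        for (size_t i = 0; i < m_dirty.size(); ++i)
            RedrawRegion(m_dirty[i]);
        m_dirty.clear();
    }

private:
    std::vector<Rect> m_dirty;
};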

Figuring your maximum frame rate for 3D games is a much trickier business, since there are many variables to consider. I'll start by making the assumption that your game is using VRAM intelligently for texturing and you are not thrashing textures into and out of VRAM. I'll also assume that your rendering engine has a nice tricky way of skipping hidden polygons, minimizing pixel overdraw. The big variables then become your polygon fill rate and your vertex transform rate. The fill rate is a measure of how fast your rasterizer can fill a polygonal area with the proper pixel values using lighting and texturing sources. The transform rate is how fast the rendering engine can massage 3D world data and the camera position into a list of 2D screen polygons with appropriate lighting data. Finally, many renderers pay a measurable cost for switching rendering states, such as loading new textures.
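
If you want to put rough numbers on those variables, a cost model like the one below is a reasonable starting point. Every hardware figure in it is an assumption for illustration; measure your real fill rate, transform rate, and state change cost and plug those in:

#include <cstdio>

// Rough per-frame cost model for a 3D scene: fill cost + transform cost +
// render state changes. All of the hardware numbers below are placeholder
// assumptions; use figures measured on your actual target hardware.
int main()
{
    const double fillRatePixelsPerSec  = 300.0e6;  // rasterizer fill rate
    const double transformVertsPerSec  = 10.0e6;   // transform & light rate
    const double secondsPerStateChange = 20.0e-6;  // cost of a texture/state switch

    const double pixelsFilledPerFrame  = 800 * 600 * 1.5;  // ~150% overdraw
    const double vertsPerFrame         = 60000.0;
    const int    stateChangesPerFrame  = 200;

    double frameSeconds = pixelsFilledPerFrame / fillRatePixelsPerSec
                        + vertsPerFrame / transformVertsPerSec
                        + stateChangesPerFrame * secondsPerStateChange;

    std::printf("Estimated frame time: %.2f ms (%.1f fps)\n",
                frameSeconds * 1000.0, 1.0 / frameSeconds);
    return 0;
}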

The thing that makes estimating these two variables and how they will affect your frame rate difficult is that most games don't have a scene that can be considered a standard benchmark. Good game and art designers will recognize that different scenes need varying levels of geometry and texture density. So what do you do?

If you are using an off-the-shelf renderer like Renderware or Intrinsic Alchemy, you've got excellent reference material in the games that have already shipped with the most recent version. Every middleware company out there constantly improves its technology, and you can also count on getting some one-on-one help to get you over a frame rate hump. If you set your game technology to target the resolution and scene density of the top end of current games on the market, you'll be in a good position to hit that target. If you happen to have someone like John Carmack on your team, of course, you can go a lot farther than that! The bottom line is that you must take your team's strength into consideration when choosing how far you want to push your maximum resolution.

Core Display Technology: 2D or 3D

Almost any game design can be executed in a 2D or 3D graphics engine. After all, Pac-Man was just a first-generation shooter, wasn't it? I think it's a mistake to assume that you have to choose a hot 3D graphics engine in order to sell more games. Instead, I'd like to believe that an excellent 2D game can set the world on fire simply due to a unique and focused game design. If you disagree with me, you've probably never played Bedazzled or Tetris. The truth is that 2D and 3D graphics engines each bring unique visualization possibilities to the table.

Before you read one more sentence, make sure you understand an important point: camera perspective has very little to do with the display engine. This brash statement deserves repeating: any game design requiring a particular perspective (first person, third person, and so on) can work in a 2D or 3D graphics engine if you are willing to make some concessions. The differences you'll find are primarily in six areas:

  • Camera movement

  • Game world design

  • Art and animation

  • Special effects

  • Player expectation

  • Cost and difficulty of production

Camera Movement

Camera movement is a primary constraint in a 2D engine. A 2D sprite engine can certainly depict a beautiful 3D world as was done in Myst or Riven, but it will never be able to move through that world like the player viewpoint in Unreal Tournament.

In Riven, the camera viewpoint didn't change in a continuous manner as you played the game (see Figure 2.1). The camera position and orientation were locked down, and the player interacted with items directly in front of the camera. The immense detail of the scene was simply too expensive to draw in real time. Pre-rendered backgrounds and environments are a great compromise, trading freedom of movement for a fantastic look, so adventure and puzzle games can make great use of this technique.

Figure 2.1: Screenshots from Riven by Cyan.

Diablo II was a fast-paced fantasy adventure game and the camera followed the movements of the player as shown in Figure 2.2. Because the camera never changed orientation, it was possible to draw the world with an enormous set of prebuilt pieces. Ultima games from Ultima I all the way to Ultima VIII were done in the same way.

Figure 2.2: Screenshots of Diablo II by Blizzard.

It's important to note here that games like Ultima may have been drawn with sprites, but the game world had three dimensions. After all, how could the Avatar walk up a flight of stairs or climb on top of a roof if the world didn't have any depth?

Of course, lots more games use a simple sprite engine. This is quite common in kids' games, trivia games, and other casual entertainment (see Figure 2.3).

Figure 2.3: Screenshot of Hoyle Cards by Sierra and Marvel vs. Capcom 2 by Marvel/Capcom.

Games that use 3D engines, on the other hand, have few or no constraints on their camera movement. The only thing you have to do is make sure that if the camera can look straight up and see the sky, you put one there worth looking at!

A Tale from the Pixel Mines

Working on Ultima games was a dream come true for me, since I've played every single one of them, although I never finished Ultima VI for some reason. Richard wanted to move Ultima to a 3D engine for Ultima IX: Ascension, and it was a real nightmare. Ultima games have always pushed scene complexity a little too far, and Ultima IX was no exception. The original design was going to use a third-person camera in the same vein as classic Ultima games, but this time the camera was going to be able to swing around and show you the north and east sides of all the buildings! Richard really wanted a point-of-view camera, so the team obliged and put the camera right on the Avatar's shoulders. For the first time in the 18 years Richard had been making Ultima games, he was able to walk right up to Lord British's castle gates and look up at the tower. It was nice to have a castle taller than 64 pixels.

Game World Design

The constraint you'll find in 3D engines is on your game world design. Perhaps that's an unfair statement. 3D engines will draw all the polygons you stuff into the graphics processing unit (GPU) even if it takes them forever. Forever, by the way, is defined as anything more than 50ms. The real problem a 3D engine has is choosing which polygons to draw to make the most compelling scene.

Consider the problem of a flight simulator like Microsoft Flight Simulator. When the plane is on the ground, the display looks a lot like every other 3D game out there. You see a few buildings, a few other planes, and a runway. You might also see some scenery in the distance, like a mountain range or a city skyline (see Figure 2.4).

Figure 2.4: Screenshots of Microsoft Flight Simulator 2002 by Microsoft.

Once the plane is up in the air, you have a different story altogether. You've increased the viewable surface by a few orders of magnitude, and therefore the potential viewable set of polygons. Anyone who attempts a naïve approach of simply drawing all the polygons will quickly learn that they can't get their plane more than 150 feet off the ground. The frame rate will fall in inverse geometric proportion to the altitude of the plane, because that's how many more polygons you have to draw to achieve a realistic look.

The actual approach to this problem uses different levels of detail to draw areas of terrain and objects depending on their distance from the viewer. On some flight simulators you can catch this happening. Simply begin a slow descent and watch as the terrain suddenly becomes better looking; the green patches will increase in detail and eventually become individual trees until you crash into them. One of the trickier parts of most 3D engines is getting the levels of detail to transition smoothly, avoiding the "popping" effect.
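
Here's a sketch of what that selection logic might look like. The distance thresholds and the hysteresis band are made-up numbers you'd tune per object; the hysteresis is what keeps an object from flickering, or "popping," back and forth when it sits right at a boundary:

// A sketch of distance-based level-of-detail selection. The thresholds and
// the hysteresis band are invented values to be tuned per object; the
// hysteresis keeps an object from flickering between two LODs when it sits
// right at a threshold.
struct LodThreshold { float distance; int lodIndex; };

int SelectLod(float distanceToCamera, int currentLod)
{
    static const LodThreshold thresholds[] = {
        {  50.0f, 0 },   // full detail
        { 150.0f, 1 },   // medium
        { 400.0f, 2 },   // low
    };
    static const float hysteresis = 10.0f;   // slack around each boundary

    int newLod = 3;                           // beyond the last threshold: impostor/billboard
    for (int i = 0; i < 3; ++i)
    {
        if (distanceToCamera < thresholds[i].distance)
        {
            newLod = thresholds[i].lodIndex;
            break;
        }
    }

    // Only switch LODs once we've moved clearly past the boundary.
    if (newLod != currentLod)
    {
        float boundary = thresholds[newLod < currentLod ? newLod : currentLod].distance;
        if (distanceToCamera > boundary - hysteresis &&
            distanceToCamera < boundary + hysteresis)
            return currentLod;                // stay put inside the slack band
    }
    return newLod;
}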

Another problem is avoiding overdraw. If your game is in a complex interior environment, you'll achieve the fastest frame rate if you only draw the polygons that you can see. Again, the naïve approach is to simply draw all of the polygons in the view frustum, omitting any that face away from the camera. This solution will most likely result in extremely choppy frame rates in certain areas but not others, even when the camera is pointed straight at an interior wall. When the game bogs down like this, it is drawing an enormous number of polygons behind the wall, only to have them covered up by the bigger polygons close to the camera. What a waste!

You'll need some advanced tools to help you analyze your level and calculate what areas can be seen given a particular viewing location. Renderware has a pretty good tool to do this. This analysis takes a long time, but the increase in performance is critical. Competitive games are all pushing the envelope for the illusion of extremely complicated worlds. The trick is to create these worlds such that the lines of sight are very short in areas of dense geometry and very long in areas of sparse geometry. Add to that mix of technology some nice levels of detail, and you can get a game that looks as good as Grand Theft Auto: Vice City, as shown in Figure 2.5.

Figure 2.5: Screenshots of Grand Theft Auto: Vice City by Rockstar.
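
I can't show you Renderware's internals, but the runtime side of any precomputed visibility scheme boils down to something like the sketch below: an offline tool fills in, for each sector of the level, the list of sectors that could possibly be seen from it, and at run time you draw only that list. The Sector type and DrawSector() are hypothetical placeholders:

#include <vector>

// Generic sketch of consuming a precomputed visibility set: an offline tool
// decides, for every sector of the level, which other sectors can possibly be
// seen from it. At runtime you draw only the sectors listed for the camera's
// current sector. Sector and DrawSector() are hypothetical stand-ins.
struct Sector
{
    std::vector<int> visibleSectors;  // filled in by the offline visibility tool
    // ... this sector's geometry lives here
};

void DrawSector(const Sector& s);     // engine-specific rendering, stubbed below

void RenderVisibleWorld(const std::vector<Sector>& world, int cameraSector)
{
    DrawSector(world[cameraSector]);                       // always draw where we stand
    const std::vector<int>& vis = world[cameraSector].visibleSectors;
    for (size_t i = 0; i < vis.size(); ++i)
        DrawSector(world[vis[i]]);                         // plus everything it can see
}

void DrawSector(const Sector&) { /* submit this sector's polygons to the renderer */ }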

Since 3D engines are only capable of drawing so much scenery per frame, an amazing amount of effort must go into creating the right level design. Any environment that is too dense must be fixed, or the frame rate will suffer and so will your reviews.

Gotcha

The most common mistake made on 3D games is not communicating with the artists about what the graphics engine can and can't do. Remember that the world environment is just a backdrop, and you'll still need to add interactive objects, characters, special effects, and a little bit of user interface before you can call it a day. All these things, especially the characters, will drag your performance into the ground if the background art was too aggressive. Try to establish CPU budgets for drawing the background, objects, characters, and special effects early on and hold your artists and level designers to it like glue. Measure the CPU time spent preparing and rendering these objects and display it for all to see.
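
A budget display doesn't have to be fancy. Here's a minimal sketch, assuming you have access to a high-resolution timer on your platform; the budget numbers themselves are invented and would come from your own frame rate goals:

#include <cstdio>

// A sketch of per-category frame budgets: time each chunk of the frame,
// compare against the budget, and print anything that blows it so the whole
// team sees it. The budget split is an invented example; feed ReportTime()
// from your platform's high-resolution timer.
enum Category { CAT_BACKGROUND, CAT_OBJECTS, CAT_CHARACTERS, CAT_EFFECTS, CAT_COUNT };

static const char*  kCategoryNames[CAT_COUNT] = { "Background", "Objects", "Characters", "Effects" };
static const double kBudgetMs[CAT_COUNT]      = { 8.0, 6.0, 10.0, 4.0 };   // made-up split of a 33ms frame

static double g_spentMs[CAT_COUNT];

// Call this with the measured time for each category as the frame is built.
void ReportTime(Category cat, double elapsedMs)
{
    g_spentMs[cat] += elapsedMs;
}

// Call once per frame, after rendering, to shame the over-budget categories.
void DumpBudgets()
{
    for (int i = 0; i < CAT_COUNT; ++i)
    {
        if (g_spentMs[i] > kBudgetMs[i])
            std::printf("OVER BUDGET: %s used %.2fms of %.2fms\n",
                        kCategoryNames[i], g_spentMs[i], kBudgetMs[i]);
        g_spentMs[i] = 0.0;   // reset for the next frame
    }
}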

Art and Animation

A clear difference between 2D and 3D display technologies is the creation and presentation of art assets. While 2D sprites and 3D textures are quite similar, nothing in a 2D engine resembles animating or morphing 3D meshes. As you might expect, a 3D engine can use any 2D sprite trickery in its textures, making you think at first glance that a 3D engine will always be superior to a 2D engine.

A closer look at the visual results of a well-directed 2D effort will cause you to think twice. Perhaps you'll come to believe that there is a place for 2D engines even in today's market. Now that I've confused you into second-guessing your decision to go with the latest 3D engine, let's study the issues a bit and try to bring some focus to the problem. Technology aside, your decision to use 2D versus 3D will have interesting side effects:

  • Storage requirements of dynamic objects, such as characters

  • The "look and feel" of your game

Let's compare two games from Origin Systems: Ultima Online and Ultima IX. Ultima Online's technology was sprite based, while Ultima IX was a custom 3D engine. Both games are shown in Figure 2.6.

Figure 2.6: Screenshots of Ultima IX: Ascension and Ultima Online by Origin Systems.

What's interesting and somewhat unexpected is that the art for both games was created in 3D Studio Max. Given the same camera angle, both characters look very similar. When you watch them move, however, you begin to understand what concessions you have to make if you choose 2D.

Ultima Online characters were pre-rendered, which means that every conceivable position of their bodies had to be saved in a huge set of sprites that would be animated in the right sequence to make them move. The only difference between that manner of animation and how Tex Avery brought Bugs Bunny to life was the fact that Tex used pen and ink instead of 3D Studio Max.

What made matters worse for the Ultima Online artists was that they had to pre-render all possible animations in every direction a character or monster could walk: sixteen points of the compass in all. Table 2.1 shows the ramifications of this.

Table 2.1: Storage Required for Different Activities.

Activity                                                                 Storage
Total animations (walk, run, stand, idle, kick, jump, crouch, ...)       24
Average frames of animation (2 seconds at 15 frames per second)          30
Total directions of the compass (N, NNW, NW, WNW, W, ...)                16
Total frames of animation (24 * 30 * 16)                                 11,520
Uncompressed size of one frame of animation (50x50 pixels, 8-bit art)    2.44KB
Total uncompressed animation size (11,520 * 2.44KB)                      27.41MB

This seems completely unreasonable, doesn't it? How can so many games fit so much animation data on the CD-ROM? A little compression and some cheating will cut that number down to size. Most compression techniques will give you at least a 3:1 compression ratio on sprites, especially since a good portion of those sprites is made up of the background color. You'll want a compression algorithm with a blazingly fast decompression routine, or you'll notice some nasty slowdowns in your game. You'll also want to be smart about decompressing sprites only when you need to, preferably when a level is loading. Compressing the animation sprites brings the storage for the main character down to 9.13MB:

 Uncompressed animations @ 27.41MB compressed at 3:1    = 9.13MB

The next little trick requires a bit of concession on your part to make your characters ambidextrous. Take a look at the two characters shown in Figure 2.7.

Figure 2.7: Ambidextrous Creatures from Ultima Online.

It's subtle, but notice that the same creature holds his weapon differently depending on what direction he's facing. Since the character is bilaterally symmetrical, it makes little difference if you draw a mirror image instead of the actual image. The only hang-up is the handedness of the character. When the mirror image is drawn, a weapon that sits in the right hand will suddenly appear in the left hand. At first you might think this will detract from the game. When you consider that this technique can store nearly twice the number of animations in the same space, you'll come to the conclusion that gamers want those animations and couldn't care less whether they are being beaten up left-handed.

It turns out that all the eastward or westward sprites can be flipped, so our sixteen directions can drop to nine. This takes the total amount of storage space down to a much more reasonable number:

 Dropping to 9 directions from the original 16          = 5.13MB
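
If you'd rather let the compiler do the arithmetic, the little program below reproduces the numbers from Table 2.1 and then applies the 3:1 compression and the mirroring trick. The 3:1 ratio is the rough assumption from the text, not a measurement:

#include <cstdio>

// The same arithmetic as Table 2.1, with the compression and mirroring
// savings applied at the end. The 3:1 compression ratio is the rough
// assumption used in the text, not a measured figure.
int main()
{
    const int animations    = 24;
    const int framesPerAnim = 30;       // 2 seconds at 15 fps
    const int directions    = 16;
    const int bytesPerFrame = 50 * 50;  // 50x50 pixels, 8-bit

    double rawBytes        = double(animations) * framesPerAnim * directions * bytesPerFrame;
    double compressedBytes = rawBytes / 3.0;                // assume ~3:1 sprite compression
    double mirroredBytes   = compressedBytes * 9.0 / 16.0;  // flip east/west: 16 -> 9 directions

    std::printf("Raw:        %.2f MB\n", rawBytes        / (1024.0 * 1024.0));
    std::printf("Compressed: %.2f MB\n", compressedBytes / (1024.0 * 1024.0));
    std::printf("Mirrored:   %.2f MB\n", mirroredBytes   / (1024.0 * 1024.0));
    return 0;
}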

Assuming that the main character has four or perhaps five times the number of animations as every other character in your game, you'll be happy to find that even 50MB will give you plenty of room to store people, animals, monsters, and anything else you'd like to dream up. 3D games store their art very differently, as you would expect. The original dataset for a dynamic object in a 3D game contains, at a minimum, the following things (a minimal data layout sketch follows the list):

  • Mesh: A definition of the geometry of the character given by lists of vertices, triangles, and bones.

  • Textures: A set of textures that will be applied to the mesh.

  • Animation Data: Lists of positions, orientations, and timing that can be applied to some or all parts of the character mesh.
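
Here's one way that dataset might be laid out in code. The structure names and field sizes are illustrative assumptions rather than any particular engine's file format, but notice that 12 bytes of position plus 16 bytes of quaternion is where the 28 bytes per bone per keyframe in Table 2.2 comes from:

#include <vector>

// A minimal sketch of the dataset listed above: mesh, textures, and keyframed
// animation data for a skinned character. Field names and sizes are
// illustrative assumptions, not a description of any particular engine's format.
struct Vertex   { float pos[3]; float normal[3]; float uv[2]; };
struct Triangle { unsigned short index[3]; };

struct BoneKeyframe
{
    float timeMs;           // when this key applies
    float position[3];      // 12 bytes
    float orientation[4];   // quaternion, 16 bytes -> 28 bytes of pose data per bone
};

struct Animation
{
    // One track of keyframes per bone the animation affects.
    std::vector< std::vector<BoneKeyframe> > boneTracks;
};

struct CharacterAsset
{
    std::vector<Vertex>    vertices;
    std::vector<Triangle>  triangles;
    std::vector<int>       boneParents;   // simple skeleton hierarchy
    std::vector<unsigned>  textureIds;    // handles into the texture manager
    std::vector<Animation> animations;    // walk, run, idle, and so on
};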

For comparison purposes, let's construct a character similar to the sprite-based one we just studied. This character should have similar animation characteristics.

A character rendered offline in 3D Studio Max can have tens of thousands of polygons. If you recall the number of actual pixels in the entire 2D sprite, 2,500, you'll quickly realize that there's no use in creating a model of that complexity for real-time display. In fact, you'll probably be satisfied with no more than 300 or 400 polygons if the character is only taking up 1/10th of the screen height. I think the first rendering of the Avatar in Ultima IX was down to below 200 polygons, but it was pretty obvious in the joints and he was wearing a full-face helmet. The polygon count per character is purely a judgment call, and there's no hard-and-fast rule except to get it as small as you can.

A 3D character of similar complexity to our 2D one would have very different storage characteristics as shown in Table 2.2.

Table 2.2: Storage Characteristics for a 3D Character.

Activity                                                                    Storage
Total animations (walk, run, stand, idle, kick, jump, crouch, ...)          24
Average keyframes per animation (1 keyframe every 100ms,
    average animation is 2 seconds long)                                    20
Total directions of the compass                                             1
Total animation keyframes (24 * 20 * 1)                                     480
Uncompressed size of an animation keyframe (25 bones with position
    and orientation data, 28 bytes each)                                    700 bytes
Total uncompressed animation size (480 * 700 bytes)                         ~328KB
Uncompressed size of the character mesh (400 polys @ ~20 bytes each)        8,000 bytes
Uncompressed size of the character texture (64x64, 16-bit)                  8,192 bytes
Total uncompressed size of the character                                    ~344KB

It doesn't take a genius to notice the staggering space savings if you go for a 3D representation of your characters. You don't get any of this for free; the amount of CPU horsepower it takes to draw one horizontal line of pixels on a textured and lit polygon of a hierarchical 3D character running around a 3D world is a hell of a lot more than a mov instruction.

That extra CPU budget buys you plenty of things you simply can't get with a sprite engine. One of the best examples is mixing animations. Since animation data is stored as positions and orientations of the bones it affects from keyframe to keyframe, it is a simple matter to mix two animations, even if they affect the same bones. If the main character had a limping animation, it could be mixed with the walk animation in varying degrees as the main character took damage. The effect is a subtle and continuous degradation of the character's walk. A sprite engine would simply use one animation or the other, depending on the level of damage. Clearly, the 3D engine provides a better solution.
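
The mixing itself can be as simple as the sketch below: blend each bone's position linearly and its orientation with a normalized quaternion lerp, weighted by how much of the second animation (the limp) should show through. The BonePose structure is a stand-in for your engine's own, and the nlerp assumes the two quaternions lie in the same hemisphere:

#include <cmath>

// A sketch of mixing two animations that drive the same bones. BonePose is a
// hypothetical stand-in for your engine's bone transform type.
struct BonePose
{
    float position[3];
    float orientation[4];   // quaternion (x, y, z, w)
};

BonePose MixPoses(const BonePose& a, const BonePose& b, float weightB)
{
    BonePose out;
    for (int i = 0; i < 3; ++i)
        out.position[i] = a.position[i] * (1.0f - weightB) + b.position[i] * weightB;

    // Normalized linear interpolation of the quaternions (nlerp): cheap and
    // good enough for small differences between the two poses. Assumes both
    // quaternions are in the same hemisphere (their dot product is positive).
    float lenSq = 0.0f;
    for (int i = 0; i < 4; ++i)
    {
        out.orientation[i] = a.orientation[i] * (1.0f - weightB) + b.orientation[i] * weightB;
        lenSq += out.orientation[i] * out.orientation[i];
    }
    float invLen = 1.0f / std::sqrt(lenSq);
    for (int i = 0; i < 4; ++i)
        out.orientation[i] *= invLen;

    return out;
}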

Sprite games like Ultima Online: The Second Age put amazing effort into the ability for characters to wear different kinds of clothing or carry different weapons. Since it is a sprite game, each piece of clothing and each weapon must be pre-rendered in place for every possible animation. Imagine that every item in your game world needed as many frames of animation as the main character. Ultima Online has tens of thousands of individual sprites to achieve this look.

A 3D game, on the other hand, can simply attach a weapon like a sword or a gun to an attachment point at the end of the main character's arm, essentially creating one more bone of animation data. This adds only about 13KB to the animation data, and it works the same way for all weapons of that type.
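
The attachment itself is nothing more than a matrix concatenation, as sketched below. Matrix4 and Multiply() are hypothetical stand-ins for whatever math library you use; the idea is that the weapon's world transform is the character's world matrix times the hand bone's matrix times a fixed grip offset:

// Sketch of attaching a weapon to a character's hand bone. Matrix4 and
// Multiply() are hypothetical placeholders for your own math library.
struct Matrix4 { float m[4][4]; };

Matrix4 Multiply(const Matrix4& a, const Matrix4& b)
{
    Matrix4 r = {};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                r.m[i][j] += a.m[i][k] * b.m[k][j];
    return r;
}

Matrix4 GetWeaponWorldTransform(const Matrix4& characterWorld,
                                const Matrix4& handBoneLocalToModel,
                                const Matrix4& gripOffset)
{
    // world = characterWorld * bone * offset (column-vector convention assumed)
    return Multiply(Multiply(characterWorld, handBoneLocalToModel), gripOffset);
}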



