Data Structures for 2D Games

We have seen how hardware was the main limiting factor for game development. Clearly, coding Quake in 48KB sounds like a nice challenge. But 2D games use simpler, smaller data structures and less involved routines than more advanced 3D titles. We will now study these constituent elements of classic 2D games. Three key elements define their structure:

  • A way to encode the character graphics

  • A way to encode background images

  • A way to store the game map

Sprite-Based Characters

Let's take a look back in time to see how characters were represented in the early days of the animation industry. In the days of Walt Disney, each frame of each character was painted on cellophane sheets, with the areas around the character left transparent so the background would show through. To produce the illusion of animation, drawings were layered, and movement was simulated by swapping the frames of each animation. This technique became known as cel animation, from the cellophane sheets used in it.

Analogous to cellophane sheets, 2D games store each character as a rectangular, bitmapped graphic, so it can be drawn quickly onscreen. Additional information allows the game code to detect transparent areas, so our characters blend seamlessly with the backgrounds. Specific hardware and software is then used to render these layers to the screen. Each of these bitmaps (with its associated transparency information) is called a sprite. The name comes from the early days of 2D games: because so many titles involved ghosts, sprites, and knights, these animated graphics adopted the name.

Sprites can be stored in a variety of ways depending on the hardware. A cell phone can have a black-and-white, 200x150 pixel display, whereas some newer sprite games on the PC can run at 800x600, 24-bit color with alpha effects. Again, memory and CPU performance are the limiting factors. The former display would occupy 30,000 bits (black and white requires only one bit per pixel), which is about 4KB. The latter would take well over 1MB. Next, I will explore some formats that have been popular through the years.

A first format stores sprites in black and white (or any two given colors) and restricts sprite sizes to multiples of eight. Some cell phones, as well as 8-bit machines like the Spectrum, support this format. It is very convenient: each pixel in the sprite can be stored in one bit, and keeping sizes in multiples of eight makes the sprites easy to pack into bytes. The Spectrum, for example, used 8x8, two-color sprites. The foreground and background colors are selected from a 16-color palette, so each requires 4 bits, for one extra byte in total. Overall, this format uses only 9 bytes per sprite. Because the frame buffer is 256x192 pixels, or 32x24 sprites, the whole frame buffer occupies just under 7KB (if the sprites are "flattened out" so we have a picture of the screen). For increased efficiency, many 8-bit machines do not have a frame buffer at all, but work with a tile table and a buffer holding only sprite identifiers. This is the case with the NES (more on its architecture can be found in Chapter 1, "Chronology of Game Programming"). Assuming the same resolution as the Spectrum and one byte per sprite identifier, this representation requires only 768 bytes (plus the sprite memory). Whichever method you choose, the low memory footprint comes at a cost: each 8x8 square can hold only two colors, significantly limiting the visual richness.
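The 9-byte, two-color cell described above can be sketched in C. The struct and function names here are hypothetical, but the layout follows the format just discussed: eight row bytes of 1-bit pixels, plus one attribute byte holding the foreground and background palette indices in its two nibbles.

```c
#include <stdint.h>

/* Hypothetical Spectrum-style character cell: 8 rows of 1-bit pixels
   plus one attribute byte (background index in the high nibble,
   foreground index in the low nibble of a 16-color palette). */
typedef struct {
    uint8_t rows[8];  /* one bit per pixel, MSB = leftmost pixel */
    uint8_t attr;     /* (background << 4) | foreground */
} Cell8x8;

/* Returns the palette index of the pixel at (x, y), 0 <= x, y < 8. */
static uint8_t cell_pixel(const Cell8x8 *c, int x, int y)
{
    int set = (c->rows[y] >> (7 - x)) & 1;
    return set ? (uint8_t)(c->attr & 0x0F) : (uint8_t)(c->attr >> 4);
}
```

Note that the whole cell really does occupy 9 bytes, and a 32x24 grid of them gives the roughly 7KB figure quoted above.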

A more involved option can be used if you are working on a display adapter that supports a palette of 16 colors, directly mappable to each pixel (so you can display more than two colors per sprite). Under these circumstances, each pixel can be coded in 4 bits, and every two pixels take one byte. Thus, in this case, the restriction is that sprite widths must be even, so each row packs into whole bytes; in practice, power-of-two sizes (8, 16, 32, 64…) are the norm. An 8x8 sprite then requires 32 bytes. Again, a 256x192 display would take up 24KB, around four times more space than the preceding case. But we can represent much richer scenes.

Building on this last approach, we can encode sprites supporting up to 256 colors, using one full byte per pixel. These colors usually come from a fixed palette or a palette freely defined by the user. This is the case with the popular 320x200 PC mode, where each palette entry can encode a 24-bit RGB triplet. Given these restrictions, our 8x8 sprite takes exactly 64 bytes, a display at the Spectrum's resolution would take 48KB, and a PC frame buffer running at 320x200 takes exactly 64,000 bytes. Add the space needed for the palette table (256 colors of 24 bits each, 768 bytes total), and the whole thing fits perfectly into the 64KB found in one memory segment of our beloved PCs.
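The palette-indexed scheme can be sketched as follows; the `RGB` type and `expand_row` helper are hypothetical names, but the logic is the one described: each stored byte is an index into a 768-byte table of 24-bit triplets.

```c
#include <stdint.h>
#include <stddef.h>

/* One 24-bit palette entry; a full 256-entry palette is 768 bytes. */
typedef struct { uint8_t r, g, b; } RGB;

/* Expands a row of 8-bit palette indices into 24-bit RGB pixels,
   as a display driver or software renderer would do. */
static void expand_row(const uint8_t *indices, size_t count,
                       const RGB palette[256], RGB *out)
{
    for (size_t i = 0; i < count; i++)
        out[i] = palette[indices[i]];
}
```

The arithmetic behind the "one segment" remark: 320 * 200 + 256 * 3 = 64,768 bytes, which is just under the 65,536-byte segment limit.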

Moving up the color scale, we can encode high-color sprites (16 bits per pixel). Here, two options are available. First, you can encode using 5-5-5-1 (five bits each for red, green, and blue, plus one bit for alpha). Second, you can encode using 5-6-5 (the spare bit going to green, to which the eye is most sensitive) and encode the transparency color as one of the color combinations you are not really using, as in a chroma key approach.
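Both 16-bit layouts amount to shifting and masking component bits into one word. A sketch of the two packings, taking 8-bit components and dropping their low bits (the function names are mine, not from any particular API):

```c
#include <stdint.h>

/* Hypothetical 5-5-5-1 packing: 1-bit alpha flag in the top bit,
   then 5 bits each of red, green, and blue. */
static uint16_t pack_5551(uint8_t r, uint8_t g, uint8_t b, int opaque)
{
    return (uint16_t)(((opaque & 1) << 15) |
                      ((r >> 3) << 10) | ((g >> 3) << 5) | (b >> 3));
}

/* Hypothetical 5-6-5 packing: all 16 bits carry color, so transparency
   must be signaled by reserving one unused color value (chroma key). */
static uint16_t pack_565(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint16_t)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}
```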

In an ideal world, you could use true-color sprites, either at 24 bits or at 32 if you want to work with alpha blending. Eight bits of alpha is probably more precision than you need, but it has the advantage that each pixel can be transferred in one double word (32 bits), which is convenient. However, by the time your hardware platform can afford true-color sprites, it will most likely support 3D graphics as well. That is why 2D games using true color are much less common than their 8-bit counterparts.

Now, we need to talk about transparency. Our sprites need to blend seamlessly with the background, so we need a way to encode the parts of the sprite that should leave the background unaffected. Several approaches have been used to achieve this effect.

One popular alternative is to use a separate 1-bit mask to encode transparent zones. This mask is used in the transfer process to ensure the blend is performed properly. In Figure 11.1, the mask is multiplied by the background, leaving in black the area that will later be occupied by the sprite. Then, by adding the sprite to the background directly, the blending is achieved.

Figure 11.1. Mask and sprite.
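The multiply-then-add blend of Figure 11.1 can be sketched in C. For clarity this hypothetical routine stores the mask as one byte per pixel (1 where the background shows through, 0 where the sprite goes); a real implementation would pack it to 1 bit per pixel as the text describes. The sprite is assumed to hold zeros in its transparent areas, so the final add leaves the background intact there.

```c
#include <stdint.h>

/* Hypothetical masked blit for 8-bit pixels: the mask first zeroes the
   background under the sprite (multiply), then the sprite is added in. */
static void blit_masked(uint8_t *bg, int bg_pitch,
                        const uint8_t *sprite,
                        const uint8_t *mask, /* 1 = keep background */
                        int w, int h, int x, int y)
{
    for (int j = 0; j < h; j++)
        for (int i = 0; i < w; i++) {
            uint8_t *dst = &bg[(y + j) * bg_pitch + (x + i)];
            *dst = (uint8_t)(*dst * mask[j * w + i] + sprite[j * w + i]);
        }
}
```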


This approach is simple to code, but the mask takes a significant amount of memory, especially in animated sprites. For a 32x32, 256-color sprite (which itself takes 1KB), we would need 128 extra bytes for the mask. There is an alternative to this technique: reserving one color entry in our palette as the "transparent" color and storing only those pixels that encode nontransparent colors. In a 256-color palette, the loss of one color would be reasonable. But on a lower-color platform, such as a 16-color palette, losing one color for the sake of transparency might be unacceptable; thus, the masking approach would be preferred.

To produce the illusion of animation, sprites need to be layered on the screen quickly. At least 25 screen updates per second are required, and numbers in excess of 50 are not uncommon. This operation of layering sprites onscreen is called blitting. The name comes from the word "blit," a contraction of "block image transfer." Some platforms, such as early game consoles, employed specific hardware to perform these blits. Blitting engines are essentially specialized memory copy routines, because 2D screens are mapped as linear memory arrays.
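A software blit is then a row-by-row copy, stepping the destination by the screen pitch. This sketch (hypothetical signature; real engines unroll and specialize heavily) also shows the palette-based transparency approach from the previous section: pixels matching a reserved key color are simply skipped.

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical software blit for an 8-bit linear frame buffer.
   key < 0: opaque copy; otherwise pixels equal to key are transparent. */
static void blit(uint8_t *screen, int pitch,
                 const uint8_t *sprite, int w, int h,
                 int x, int y, int key)
{
    for (int j = 0; j < h; j++) {
        uint8_t *dst = &screen[(y + j) * pitch + x];
        const uint8_t *src = &sprite[j * w];
        if (key < 0) {
            memcpy(dst, src, (size_t)w);   /* fast opaque row copy */
        } else {
            for (int i = 0; i < w; i++)    /* color-keyed copy */
                if (src[i] != (uint8_t)key)
                    dst[i] = src[i];
        }
    }
}
```

The opaque path degenerates into one `memcpy` per row, which is why hardware and software blitters alike are, at heart, glorified memory movers.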

Core Techniques and Algorithms in Game Programming (2003)