Enough Math—Please Stop

I'm done torturing you with linear algebra, but I'm not quite done with geometry. Hang in there because you'll soon find out some interesting things about triangles.

Triangles

Did you know that everything from teapots to cars to volleyball-playing beach bunnies can be made out of triangles? We all know that a geometric triangle is made up of three points. In a 3D world, a triangle is composed of three vertices. A vertex holds all of the information the renderer will use to draw the triangle, and as you might expect, there can be a lot more to it than its location in 3D space.

Different renderers will support different kinds of triangles, and therefore different kinds of vertices that create those triangles. Once you get your feet wet with one rendering technology, such as DirectX 9, you'll quickly find analogs in any other rendering technology, such as OpenGL. Since I've already sold my soul to Bill Gates, I'll show you how you create vertices in DirectX 9. A DirectX 9 vertex is a structure you define yourself. When you send your vertex data to the renderer you send in a set of flags that inform it about the contents of the vertex data.

Gotcha

You may define the structure yourself, but DirectX 9 expects the data in the structure to exist in a particular order. Search for "Vertex Formats" in the DirectX SDK to see this order. All hell will break loose if you don't.

First, you should understand the concepts of a transformed vertex versus an untransformed vertex. A transformed vertex is defined directly in screen space. It doesn't need the transformations we discussed in the last section—object to world, world to camera, camera to screen. You would use this kind of vertex to build triangles that create user interface components, since they don't need to exist in anything else but screen space.

A Tale from the Pixel Mines

Don't think that you can easily get away with defining triangles in screen space that "look" like they exist in world space. On the first Microsoft Casino project, we defined our card animations in screen space. Every corner of every card was painstakingly recorded and entered into the card animation code. These coordinates looked fairly good, but the second we needed to tweak the camera angle all the coordinates had to be recomputed, rerecorded, and entered into the code by hand. It seemed like a good idea at the time but we finally ditched this approach in favor of real cards animating through world space.

An untransformed vertex exists in object space, like the triangles that make up our teapot. Before the triangles are drawn, they'll be multiplied with the concatenated matrix that represents the transformations that will change a location in object space to projected screen space. Here's how you define a DirectX 9 vertex structure for a transformed vertex, and an untransformed vertex:

struct TRANSFORMED_VERTEX
{
    D3DXVECTOR3 position;  // The screen x, y, z - x,y are pixel coordinates
    float rhw;             // always 1.0, the reciprocal of homogeneous w
};
#define D3DFVF_TRANSFORMED_VERTEX (D3DFVF_XYZRHW)

struct UNTRANSFORMED_VERTEX
{
    D3DXVECTOR3 position;  // The position in 3D space
};
#define D3DFVF_UNTRANSFORMED_VERTEX (D3DFVF_XYZ)

The #defines below the vertex definitions are the flags that you send into renderer calls that inform the renderer how to treat the vertex data. A renderer needs to know more than the location of a vertex in 3D space or screen space. It needs to know what it looks like. There are a few categories of this appearance information, but the first one on your list is lighting and color.

Lighting, Normals, and Color

In DirectX 9 and many other rendering technologies you can assign colors to vertices yourself, or you can instruct the renderer to calculate those colors by looking at vertex data and the lights that illuminate the vertex. You can even do both. Everyone has seen games that show subtle light pools shining on walls and floors—a nice effect but completely static and unmoving. Other illumination is calculated in real time, such as when your character shines a flashlight around a scene. Multiple lights can affect individual vertices, each light adding a color component to the vertex color calculation.

Two flavors of dynamic lighting effects are diffuse and specular lighting. DirectX 9 can calculate these values for you if you want to send unlit vertices to the renderer, but you can also set the diffuse and specular colors directly. If you want the renderer to calculate the lighting values—a good idea because many 3D cards have hardware acceleration for lighting—you have to create a normal vector as a component of your vertex.

When light hits an object, the color of light becomes a component of the object's appearance. Perform a little experiment to see this in action: Take a playing card, like the ace of spades, and place it flat on a table lit by a ceiling lamp. The card takes on a color component that reflects the color of that lamp. If your lamp is a fluorescent light the card will appear white with a slight greenish tint. If your lamp is incandescent the card will take on a slightly yellowish color.

If you take the card in your hand and slowly turn it over, the brightness and color of the card changes. As the card approaches an edge-on orientation to the lamp, the effects of the lighting diminish to their minimum. The light has its maximum effect when the card is flat on the table, and its minimum effect when the card is edge-on to the light. This happens because when light hits a surface at a low angle, it spreads out and has to cover a larger area with the same number of photons. This gives you a dimming effect.

Diffuse lighting attempts to simulate this effect. With the ace sitting flat on the table again, take a pencil and put the eraser end in the middle of the card and point the tip of the pencil straight up in the air, towards your ceiling lamp. You've just created a normal vector. Turn the card as before, but hold the pencil and turn it as well, as if it were glued to the card. Notice that the light has a maximum effect when the angle between the pencil and the light is 180 degrees and minimum effect when the angle between the light and the pencil is 90 degrees, and no effect when the card faces away from the light.
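In code, the experiment boils down to a dot product between the unit normal and the unit vector pointing at the light: maximum effect when they line up, no effect when the surface faces away. Here's a minimal sketch using the D3DX math library; the function name and parameters are my own illustration, not part of DirectX:

 // A sketch of the diffuse term the pencil experiment demonstrates.
 // Both vectors are assumed to be unit length.
 float DiffuseIntensity( const D3DXVECTOR3 &normal,
                         const D3DXVECTOR3 &dirToLight )
 {
     // 1.0 when the normal points straight at the light,
     // 0.0 at 90 degrees or when the surface faces away.
     float nDotL = D3DXVec3Dot( &normal, &dirToLight );
     return ( nDotL > 0.0f ) ? nDotL : 0.0f;
 }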

Each vertex gets its own normal vector. This might seem like a waste of memory, but consider this: If each vertex has its own normal, you can change the direction of the normal vectors to "fool" the lighting system. You can make the 3D object take on a smoother shading effect. This is a common technique to blend the edges of coincident triangles. The illusion you create allows artists to create 3D models with fewer polygons.

The normals on the teapot model are calculated to create the illusion of a smooth shape as shown in Figure 9.13.

Figure 9.13: Vertex Normals on the Teapot.

Now that you know what a normal vector is, you need to know how to calculate one. If you want to find the normal vector for a triangle you'll need to use a cross product as shown here:

D3DXVECTOR3 triangle[3];
triangle[0] = D3DXVECTOR3(0,0,0);
triangle[1] = D3DXVECTOR3(5,0,0);
triangle[2] = D3DXVECTOR3(5,5,0);

D3DXVECTOR3 edge1 = triangle[1] - triangle[0];
D3DXVECTOR3 edge2 = triangle[2] - triangle[0];

D3DXVECTOR3 temp, normal;
D3DXVec3Cross(&temp, &edge1, &edge2);
D3DXVec3Normalize(&normal, &temp);

Our triangle is defined with three positions in 3D space. These positions are used to construct two edge vectors, both pointing away from the same vertex. The two edges are sent into the cross product function, which returns a vector that is pointing in the right direction, but is the wrong length. All normal vectors must be exactly one unit in length to be useful in other calculations, such as the dot product. The D3DXVec3Normalize function calculates the unit vector by dividing the temp vector by its length. The result is a normal vector you can apply to a vertex.

If you take a closer look at the teapot figure, you'll notice that the normal vectors are really the normals of multiple triangles, not just a single triangle. You calculate this by averaging the normals of each triangle that shares your vertex. Calculate the average of multiple vectors by adding them together, and dividing by the number of vectors, exactly as you would calculate the average of any other number.
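Here's a quick sketch of that averaging step. The faceNormals array and numSharedFaces count are hypothetical stand-ins for however your mesh code gathers the normals of the triangles that share a vertex:

 // Average the unit normals of every triangle sharing this vertex.
 // faceNormals and numSharedFaces are hypothetical stand-ins.
 D3DXVECTOR3 sum( 0.0f, 0.0f, 0.0f );
 for( DWORD i = 0; i < numSharedFaces; ++i )
     sum += faceNormals[i];

 // Normalizing the sum gives the same direction as dividing by the
 // count first, so we let D3DX do the work in one step.
 D3DXVECTOR3 vertexNormal;
 D3DXVec3Normalize( &vertexNormal, &sum );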

Gotcha

Calculating a normal is an expensive operation. Each triangle will require two subtractions, a cross product, a square root, and three divisions. If you create 3D meshes at run time, try to calculate your normals once, store them in object space, and use transforms to reorient them.

Specular lighting is calculated slightly differently. It adds "shininess" to an object by simulating the reflection of light on the object. The lighting calculation takes the angle of the camera into account, along with the normal vector of the polygon and the light direction.

You might be wondering why I didn't mention ambient lighting—a color value that is universally applied to every vertex in the scene. This has the effect of making an object glow like a light bulb, and isn't very realistic. Ambient lighting values are a necessary evil in today's 3D games, because they simulate low light levels on the back or underside of objects due to light reflecting all about the scene. In the next few years, I expect this light hack to be discarded completely in favor of the latest work with pixel shaders and environment-based lighting effects. I can't wait!
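For what it's worth, setting that scene-wide ambient value in the DirectX 9 fixed-function pipeline is a single render state call. A minimal sketch, assuming a valid device pointer:

 // Apply a dim, scene-wide ambient color; every lit vertex receives it.
 pDevice->SetRenderState( D3DRS_AMBIENT, D3DCOLOR_XRGB( 32, 32, 32 ) );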

Here are the DirectX 9 vertex definitions for lit and unlit vertices:

struct UNTRANSFORMED_LIT_VERTEX
{
    D3DXVECTOR3 position;   // The position in 3D space
    D3DCOLOR    diffuse;    // The diffuse color
    D3DCOLOR    specular;   // The specular color
};
#define D3DFVF_UNTRANS_LIT_VERTEX \
   (D3DFVF_XYZ | D3DFVF_DIFFUSE | D3DFVF_SPECULAR)

struct UNTRANSFORMED_UNLIT_VERTEX
{
    D3DXVECTOR3 position;   // The position in 3D space
    D3DXVECTOR3 normal;     // The normal vector (must be 1.0 units in length)
    D3DCOLOR    diffuse;    // The diffuse color
    D3DCOLOR    specular;   // The specular color
};
#define FVF_UNTRANS_UNLIT_VERT \
   (D3DFVF_XYZ | D3DFVF_NORMAL | D3DFVF_DIFFUSE | D3DFVF_SPECULAR)

Notice that both vertex definitions were of the untransformed variety, but there's nothing keeping you from making transformed versions of these things. It's entirely up to you and what you need for your game. Remember that the transformed versions will bypass the transformation and lighting pipeline entirely; the transformation and lighting pipeline are inseparable.

Note also that the unlit vertex still has definitions for diffuse and specular color information. This is kind of like having the best of both worlds: you can set specific diffuse and specular lighting on each vertex for static lights, and the renderer will add any dynamic lights if they affect the vertex.

Textured Vertices

A texture is a piece of two-dimensional art that is applied to a model. Each vertex gets a texture coordinate. Texture coordinates are conventionally defined as (U,V) coordinates, where U is the horizontal component and V is the vertical component. Classically, these coordinates are described as floating-point numbers where (0.0f, 0.0f) signifies the top left of the texture, with U growing to the right and V growing down. The coordinate (0.5f, 0.5f) would signify the exact center of the texture. Each vertex gets a texture coordinate for every texture. DirectX 9 supports up to eight textures on a single vertex.

Here's an example of a vertex with a texture coordinate:

// A structure for our custom vertex type. We added texture coordinates
struct COLORED_TEXTURED_VERTEX
{
    D3DXVECTOR3 position; // The position
    D3DCOLOR    color;    // The color
    FLOAT       tu, tv;   // The texture coordinates
};

// Our custom FVF, which describes our custom vertex structure
#define D3DFVF_COLORED_TEXTURED_VERTEX (D3DFVF_XYZ|D3DFVF_DIFFUSE|D3DFVF_TEX1)

This vertex happens to include a diffuse color component as well, and you should also be able to tell by the flags that this vertex is untransformed, which means it exists in 3D world space, as opposed to screen space. This kind of vertex is not affected by any dynamic lighting in a scene, but it can be prelit by an artist, creating nicely lit environments. This vertex is also extremely efficient, since it isn't sent into the lighting equations.

Numbers greater than 1.0 can tile the texture, mirror it, or clamp it, depending on the addressing mode of the renderer. If you wanted a texture to tile three times in the horizontal direction and four times in the vertical direction on the surface of a single polygon, the texture (U,V) coordinate that would accomplish that task would be (3.0f, 4.0f). Numbers less than 0.0f are also supported; depending on the addressing mode, they can have the effect of mirroring the texture.
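The addressing mode itself is set on the renderer, not in the vertex data. Here's a sketch of the DirectX 9 calls, assuming a valid device pointer; swap in D3DTADDRESS_MIRROR or D3DTADDRESS_CLAMP to change the behavior:

 // Tile (wrap) the texture in both U and V on texture stage 0.
 pDevice->SetSamplerState( 0, D3DSAMP_ADDRESSU, D3DTADDRESS_WRAP );
 pDevice->SetSamplerState( 0, D3DSAMP_ADDRESSV, D3DTADDRESS_WRAP );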

Other Vertex Data

If you happen to have the DirectX SDK documentation open and you are following along, you'll notice that I skipped over a few additional vertex data components such as blending weight and vertex point size, and also tons of texturing minutia. All I can say is that these topics are beyond the scope of this simple 3D primer. I hope you'll forgive me and perhaps write a note to my publisher begging for me to write a more comprehensive book on the subject. That is, of course, if my wife ever lets me write another book. You have no idea how much housework I've been able to get out of by writing.

Triangle Meshes

We've been talking so far about individual vertices. It's time to take that knowledge and create some triangle meshes. There are three common approaches to defining sets of triangles:

  • Triangle List: A group of vertices that defines individual triangles; each set of three vertices defines a single triangle.

  • Triangle Strip: A set of vertices that defines a strip of connected triangles; this is more efficient than a triangle list because fewer vertices are duplicated (a strip of N triangles needs only N + 2 vertices, versus 3N for a list). This is probably the most popular primitive because it is efficient and can create a wide variety of shapes.

  • Triangle Fan: Similar to a triangle strip, but all the triangles share one central vertex; also very efficient.

When you define sets of vertices in DirectX 9, you put them in a vertex buffer. The vertex buffer is sent to the renderer in one atomic piece, which implies that every triangle defined in the buffer is rendered with the current state of the renderer. Every triangle will have the same texture, be affected by the same lights, and so on.

This turns out to be a staggeringly good optimization. The teapot you saw earlier in this chapter required 2,256 triangles and 1,178 vertices, but it could be drawn with around 50 triangle strips. (DirectX mesh objects, it turns out, are always triangle lists.) Lists or strips are much faster than sending each triangle to the card and rendering it individually, which is what happened in the dark ages—circa 1996.

In DirectX 9, you create a vertex buffer, fill it with your triangle data, and then use it for rendering at a time of your choosing:

class Triangle
{
   LPDIRECT3DVERTEXBUFFER9 m_pVerts;
   DWORD m_numVerts;

public:
   Triangle()  { m_pVerts = NULL; m_numVerts = 3; }
   ~Triangle() { SAFE_RELEASE(m_pVerts); }

   HRESULT Create(LPDIRECT3DDEVICE9 pDevice);
   HRESULT Render(LPDIRECT3DDEVICE9 pDevice);
};

HRESULT Triangle::Create(LPDIRECT3DDEVICE9 pDevice)
{
    // Create the vertex buffer.
    m_numVerts = 3;
    if( FAILED( pDevice->CreateVertexBuffer(
        m_numVerts*sizeof(TRANSFORMED_VERTEX),
        D3DUSAGE_WRITEONLY, D3DFVF_TRANSFORMED_VERTEX,
        D3DPOOL_MANAGED, &m_pVerts, NULL ) ) )
    {
        return E_FAIL;
    }

    // Fill the vertex buffer with a single screen-space triangle.
    TRANSFORMED_VERTEX* pVertices;
    if( FAILED( m_pVerts->Lock( 0, 0, (void**)&pVertices, 0 ) ) )
        return E_FAIL;

    pVertices[0].position = D3DXVECTOR3(0,0,0);
    pVertices[0].rhw = 1.0f;
    pVertices[1].position = D3DXVECTOR3(0,50,0);
    pVertices[1].rhw = 1.0f;
    pVertices[2].position = D3DXVECTOR3(50,50,0);
    pVertices[2].rhw = 1.0f;

    m_pVerts->Unlock();
    return S_OK;
}

This is a simple example of creating a vertex buffer with a single triangle, and a transformed one at that. The call to CreateVertexBuffer is somewhat scary looking, but all it does is set up a piece of memory of the right size, declare the kind of vertex that will inhabit the buffer, and specify how the memory will be managed.

Once the buffer is created, you have to lock it before writing data values. This should remind you somewhat of locking a 2D surface. The single triangle has three vertices—no surprise there. Take a quick look at the position values and you'll see that I've defined a triangle that will sit in the upper left-hand corner of the screen with a base and height of 50 pixels. This triangle is defined in screen space, since the vector is defined as a transformed vertex.

When I'm ready to render this vertex buffer, I call this code:

HRESULT Triangle::Render(LPDIRECT3DDEVICE9 pDevice)
{
   pDevice->SetStreamSource( 0, m_pVerts, 0, sizeof(TRANSFORMED_VERTEX) );
   pDevice->SetFVF( D3DFVF_TRANSFORMED_VERTEX );
   pDevice->DrawPrimitive( D3DPT_TRIANGLELIST, 0, 1 );
   return S_OK;
}

The first call sets the stream source, or vertex buffer, to our triangle. The second call tells D3D what kind of vertices to expect in the stream buffer, using the flags that you or'ed together when you defined the vertex structure. The last call to DrawPrimitive actually renders the triangle.

Gotcha

You can't call any drawing functions in Direct3D without first calling IDirect3DDevice9::BeginScene, and you must call IDirect3DDevice9::EndScene when you are done drawing! The example above encapsulates the rendering of a single triangle and would be called only from within the context of the beginning and ending of a scene.
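To make that concrete, here's a minimal sketch of a single frame built around the Triangle class from above; the clear color and the triangle variable are my own:

 // One frame: clear, draw inside Begin/EndScene, then present.
 pDevice->Clear( 0, NULL, D3DCLEAR_TARGET,
                 D3DCOLOR_XRGB( 0, 0, 64 ), 1.0f, 0 );
 if( SUCCEEDED( pDevice->BeginScene() ) )
 {
     triangle.Render( pDevice );    // all drawing goes here
     pDevice->EndScene();
 }
 pDevice->Present( NULL, NULL, NULL, NULL );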

Indexed Triangle Meshes

There's one more wrinkle to defining triangle meshes: Instead of sending vertex data to the renderer alone, you can send an index buffer along with it. This index is an array of 16- or 32-bit numbers that defines the vertex order, allowing you to avoid serious vertex duplication and therefore save memory. Let's take a look at a slightly more complicated mesh example. Here's the code that created the grid mesh in the teapot example:

class Grid
{
protected:
   LPDIRECT3DTEXTURE9       m_pTexture;   // the grid texture
   LPDIRECT3DVERTEXBUFFER9  m_pVerts;     // the grid verts
   LPDIRECT3DINDEXBUFFER9   m_pIndices;   // the grid index
   DWORD                    m_numVerts;
   DWORD                    m_numPolys;

public:
   Grid();
   ~Grid();
   HRESULT Create(LPDIRECT3DDEVICE9 pDevice,
                  const DWORD gridSize,
                  const DWORD color);
   HRESULT Render(LPDIRECT3DDEVICE9 pDevice);
};

Grid::Grid()
{
   m_pTexture = NULL;
   m_pVerts = NULL;
   m_pIndices = NULL;
   m_numVerts = m_numPolys = 0;
}

Grid::~Grid()
{
   SAFE_RELEASE(m_pTexture);
   SAFE_RELEASE(m_pVerts);
   SAFE_RELEASE(m_pIndices);
}

HRESULT Grid::Create(
   LPDIRECT3DDEVICE9 pDevice,
   const DWORD gridSize,
   const DWORD color)
{
   if( FAILED( D3DUtil_CreateTexture(
        pDevice, "Textures\\Grid.dds", &m_pTexture ) ) )
   {
      return E_FAIL;
   }

   // Create the vertex buffer - we'll need enough verts
   // to populate the grid. If we want a 2x2 grid, we'll
   // need a 3x3 set of verts.
   m_numVerts = (gridSize+1)*(gridSize+1);
   if( FAILED( pDevice->CreateVertexBuffer(
        m_numVerts*sizeof(COLORED_TEXTURED_VERTEX),
        D3DUSAGE_WRITEONLY, D3DFVF_COLORED_TEXTURED_VERTEX,
        D3DPOOL_MANAGED, &m_pVerts, NULL ) ) )
   {
      return E_FAIL;
   }

   // Fill the vertex buffer. We are setting the tu and tv texture
   // coordinates, which range from 0.0 to 1.0
   COLORED_TEXTURED_VERTEX* pVertices;
   if( FAILED( m_pVerts->Lock( 0, 0, (void**)&pVertices, 0 ) ) )
      return E_FAIL;

   for( DWORD j=0; j<(gridSize+1); j++ )
   {
      for( DWORD i=0; i<(gridSize+1); i++ )
      {
         // Which vertex are we setting?
         int index = i + (j * (gridSize+1));
         COLORED_TEXTURED_VERTEX *vert = &pVertices[index];

         // Default position of the grid is at the origin, flat on
         // the XZ plane.
         float x = (float)i;
         float y = (float)j;
         vert->position =
            ( x * D3DXVECTOR3(1,0,0) ) + ( y * D3DXVECTOR3(0,0,1) );
         vert->color = color;

         // The texture coordinates are set to x,y to make the
         // texture tile along with units - 1.0, 2.0, 3.0, etc.
         vert->tu = x;
         vert->tv = y;
      }
   }
   m_pVerts->Unlock();

   // The number of indices equals the number of polygons times 3,
   // since there are 3 indices per polygon. Each grid square contains
   // two polygons. The indices are 16 bit, since our grids won't
   // be that big!
   m_numPolys = gridSize*gridSize*2;
   if( FAILED( pDevice->CreateIndexBuffer(
        sizeof(WORD) * m_numPolys * 3,
        D3DUSAGE_WRITEONLY, D3DFMT_INDEX16,
        D3DPOOL_MANAGED, &m_pIndices, NULL ) ) )
   {
      return E_FAIL;
   }

   WORD *pIndices;
   if( FAILED( m_pIndices->Lock( 0, 0, (void**)&pIndices, 0 ) ) )
      return E_FAIL;

   // Loop through the grid squares and calc the values
   // of each index. Each grid square has two triangles:
   //
   //   A - B
   //   | / |
   //   C - D
   for( DWORD j=0; j<gridSize; j++ )
   {
      for( DWORD i=0; i<gridSize; i++ )
      {
         // Triangle #1  ACB
         *(pIndices)   = WORD( i     +  j   *(gridSize+1) );
         *(pIndices+1) = WORD( i     + (j+1)*(gridSize+1) );
         *(pIndices+2) = WORD( (i+1) +  j   *(gridSize+1) );

         // Triangle #2  BCD
         *(pIndices+3) = WORD( (i+1) +  j   *(gridSize+1) );
         *(pIndices+4) = WORD( i     + (j+1)*(gridSize+1) );
         *(pIndices+5) = WORD( (i+1) + (j+1)*(gridSize+1) );
         pIndices += 6;
      }
   }
   m_pIndices->Unlock();
   return S_OK;
}

I've commented the code pretty heavily to help you understand what's going on. An index buffer is created and filled in much the same way as a vertex buffer. Take a few minutes to stare at the code that assigns the index numbers—it's the last nested for loop. If you have trouble figuring it out, trace the code with a 2x2 grid and you'll get it: the first grid square (i=0, j=0) produces the triangles 0-3-1 and 1-3-4.

This code creates an indexed triangle list. If you wanted to be truly efficient, you'd rewrite the code to create an indexed triangle strip. All you have to do is change the index buffer. I'll leave that to you. If you can get that working, you'll know you have no trouble understanding index buffers. The code that renders the grid looks very similar to the triangle example:

HRESULT Grid::Render(LPDIRECT3DDEVICE9 pDevice)
{
    // Setup our texture. Using textures introduces the texture stage states,
    // which govern how textures get blended together (in the case of multiple
    // textures) and lighting information. In this case, we are modulating
    // (blending) our texture with the diffuse color of the vertices.
    pDevice->SetTexture( 0, m_pTexture );
    pDevice->SetTextureStageState( 0, D3DTSS_COLOROP,   D3DTOP_MODULATE );
    pDevice->SetTextureStageState( 0, D3DTSS_COLORARG1, D3DTA_TEXTURE );
    pDevice->SetTextureStageState( 0, D3DTSS_COLORARG2, D3DTA_DIFFUSE );

    pDevice->SetStreamSource( 0, m_pVerts, 0, sizeof(COLORED_TEXTURED_VERTEX) );
    pDevice->SetIndices( m_pIndices );
    pDevice->SetFVF( D3DFVF_COLORED_TEXTURED_VERTEX );
    pDevice->DrawIndexedPrimitive(
        D3DPT_TRIANGLELIST, 0, 0, m_numVerts, 0, m_numPolys );
    return S_OK;
}

You'll note the few extra calls to let the renderer know that the triangles in the mesh are textured, and that the texture is affected by the diffuse color of the vertex—modulation multiplies the texture color and the vertex color channel by channel. This means that a black and white texture will take on a colored hue based on the diffuse color setting of the vertex. It's a little like choosing different colored wallpaper with the same pattern.

Materials

There's a lot more to texturing than the few calls you've seen so far. One thing you'll need to check out in DirectX 9 is materials. When you look at the structure of D3DMATERIAL9, you'll see things that remind you of those color settings in vertex data:

typedef struct _D3DMATERIAL9 {
    D3DCOLORVALUE Diffuse;
    D3DCOLORVALUE Ambient;
    D3DCOLORVALUE Specular;
    D3DCOLORVALUE Emissive;
    float Power;
} D3DMATERIAL9;

If the DirectX 9 renderer doesn't have any specific color data for vertices, it will use the current material to set the color of each vertex, composing all the material color information with the active lights illuminating the scene.

Gotcha

One common mistake with DirectX 9 is not setting a default material. If your vertex data doesn't include diffuse or specular color information your mesh will appear completely black. If your game has a black background, objects in your scene will completely disappear!
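One way to guard against that mistake is to set a plain white material before rendering any lit geometry. A minimal sketch, assuming a valid device pointer:

 // A plain white default material so lit, uncolored meshes don't go black.
 D3DMATERIAL9 mtrl;
 ZeroMemory( &mtrl, sizeof( D3DMATERIAL9 ) );
 mtrl.Diffuse.r = mtrl.Diffuse.g = mtrl.Diffuse.b = mtrl.Diffuse.a = 1.0f;
 mtrl.Ambient = mtrl.Diffuse;    // respond to ambient light, too
 pDevice->SetMaterial( &mtrl );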

Other than the critical information about needing a default material and texture, the DirectX SDK documentation does a pretty fair job of showing you what happens when you play with the specular and power settings. They can turn a plastic ping pong ball into a ball bearing, highlights and everything.

Texturing

Back in Grid::Create I quietly included some texture calls into the code. Let's start with what I did to actually create the texture in the first place, and go through the calls that apply the texture to a set of vertices. The first thing you'll do to create a texture is pop into Photoshop, Paint, or any bitmap editing tool. That leaves out tools like Macromedia Flash or Illustrator because they are vector tools and are no good for bitmaps.

Go into one of these tools and create an image 128x128 pixels in size. Figure 9.14 shows my version.

Figure 9.14: A Sample Texture.

Save the texture as a TIF, TGA, or BMP. If you are working in Photoshop you'll want to save the PSD file for future editing but our next step can't read PSDs. Open the DirectX Texture Tool and load your texture file. Save it as a DDS file where your game will be able to load it. Your game will load the texture by calling a DirectX utility function:

if( FAILED( D3DUtil_CreateTexture(
     pDevice, "Textures\\texture.dds", &m_pTexture ) ) )
{
     return E_FAIL;
}

This function makes some assumptions about how you want to load your texture. The source code for this function is in the code created by the DirectX 9 Project Wizard:

HRESULT D3DUtil_CreateTexture(
   LPDIRECT3DDEVICE9 pd3dDevice, TCHAR* strTexture,
   LPDIRECT3DTEXTURE9* ppTexture, D3DFORMAT d3dFormat )
{
    HRESULT hr;
    TCHAR strPath[MAX_PATH];

    // Get the path to the texture
    if( FAILED( hr = DXUtil_FindMediaFileCb(
                    strPath, sizeof(strPath), strTexture ) ) )
        return hr;

    // Create the texture using D3DX
    return D3DXCreateTextureFromFileEx( pd3dDevice, strPath,
             D3DX_DEFAULT, D3DX_DEFAULT, D3DX_DEFAULT, 0, d3dFormat,
             D3DPOOL_MANAGED, D3DX_FILTER_TRIANGLE|D3DX_FILTER_MIRROR,
             D3DX_FILTER_TRIANGLE|D3DX_FILTER_MIRROR, 0, NULL, NULL,
             ppTexture );
}

If you ever thought that texturing was trivial, the call to D3DXCreateTextureFromFileEx should make you think twice. If you look at the DirectX 9 documentation on this function, it's clear there's a lot to learn—way too much for an introduction.

There is one important concept, mip-mapping, that needs special attention. If you've ever seen old 3D games, or perhaps just really bad 3D games, you'll probably recall an odd effect that happens to textured objects as you back away from them. This effect, called scintillation, is especially noticeable on textures with a regular pattern, such as a black and white checkerboard pattern. As the textured object recedes in the distance, you begin to notice that the texture seems to jump around in weird patterns. This is due to an effect called subsampling.

Subsampling

Assume for the moment that a texture appears on a polygon very close to its original size. If the texture is 128x128 pixels, the polygon on the screen will look almost exactly like the texture. If this polygon were reduced to half of this size, 64x64 pixels, the renderer must choose which pixels from the original texture must be applied to the polygon. So what happens if the original texture looks like the one shown in Figure 9.15?

Figure 9.15: A 128x128 Texture with Alternating White and Black Vertical Lines.

This texture is 128x128 pixels, with alternating vertical lines exactly one pixel in width. If you reduced this texture in a simple paint program, you might get nothing but a 64x64 texture that is completely black. What's going on here?

When the texture is reduced to half its size, the naïve approach would select every other pixel in the grid, which in this case happens to be every black pixel on the texture. The original texture has a certain amount of information, or frequency, in its data stream. The frequency of the above texture is the number of alternating lines. Each pair of black and white lines is considered one wave in a waveform that makes up the entire texture. The frequency of this texture is 64, since it takes 64 waves of black and white lines to make up the texture.

Subsampling is what occurs if any waveform is sampled at less than twice its frequency; in the above case, any sample taken at less than 128 will drop critical information from the original data stream.

It might seem weird to think of a texture having a frequency, but they do. A high frequency implies a high degree of information content. In the case of a texture, it has to do with the number of undulations in the waveform that makes up the data stream. If the texture were nothing more than a black square, it would have a minimal frequency and therefore carry only the smallest amount of information. A texture that is a solid black square, no matter how large, can be sampled at any rate whatsoever. No information is lost because there wasn't much information to begin with.

In case you were wondering whether this subject of subsampling can apply to audio waveforms, it can. Let's assume you have a high-frequency sound, say a tone at 11KHz. If you attempt to sample this tone in a WAV file at 11KHz, exactly the frequency of the tone, you won't be happy with the results. You'll get a subsampled version of the original sound. Just as the texture turned completely black, your subsampled sound will be a completely flat line, erasing the sound altogether.
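If you want to see that flat line for yourself, here's a toy sketch (plain C++, nothing DirectX about it) that samples an 11KHz sine wave at exactly 11KHz:

 #include <cmath>
 #include <cstdio>

 int main()
 {
     const double freq = 11000.0;        // tone frequency in Hz
     const double sampleRate = 11000.0;  // sampling at exactly the tone's frequency
     for( int n = 0; n < 5; ++n )
     {
         // Every sample lands on the same phase: sin(2*pi*n) == 0.
         double t = n / sampleRate;
         printf( "sample %d: %f\n", n, sin( 2.0 * 3.14159265358979 * freq * t ) );
     }
     return 0;   // the output is all (near) zeros -- the tone has vanished
 }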

It turns out there is a solution for this problem, and it involves processing and filtering the original data stream to preserve as much of the original waveform as possible. For sounds and textures, the new sample isn't just grabbed from an original piece of data in the waveform. The data closest to the sample is used to figure out what is happening to the waveform, instead of one value of the waveform at a discrete point in time.

In the case of our lined texture above, the waveform is alternating from black to white as you sample horizontally across the texture, so naturally if the texture diminishes in size the eye should begin to perceive a 50% gray surface. It's no surprise that if you combine black and white in equal amounts you get 50% gray.

For textures, each sample involves the surrounding neighborhood of pixels—a process known as bi-linear filtering. The process is a linear combination of the pixel values on all sides of the sampled pixel—nine values in all. These nine values are weighted and combined to create the new sample. The same approach can be used with sounds as well, as you might have expected.
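Here's a bare-bones sketch of filtered reduction for a grayscale image. It uses a simple 2x2 box average rather than the nine-value neighborhood described above, but it shows the principle: each destination pixel blends several source pixels instead of grabbing one.

 // Halve a grayscale image by averaging each 2x2 block of source pixels.
 // A naive every-other-pixel copy is exactly the subsampling that turns
 // the striped texture solid black; averaging yields the expected gray.
 void DownsampleHalf( const unsigned char *src, int srcW, int srcH,
                      unsigned char *dst )   // dst is (srcW/2) x (srcH/2)
 {
     const int dstW = srcW / 2;
     for( int y = 0; y < srcH / 2; ++y )
     {
         for( int x = 0; x < dstW; ++x )
         {
             int sum = src[ (2*y)   * srcW + 2*x     ]
                     + src[ (2*y)   * srcW + 2*x + 1 ]
                     + src[ (2*y+1) * srcW + 2*x     ]
                     + src[ (2*y+1) * srcW + 2*x + 1 ];
             dst[ y * dstW + x ] = (unsigned char)( sum / 4 );
         }
     }
 }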

This processing and filtering is pretty expensive so you don't want to do it in real time for textures or sounds. Instead, you'll want to create a set of reduced images for each texture in your game. This master texture is known as a mip-map.

Mip-Mapping

A mip-map is a set of textures that has been pre-processed to contain one or more levels of size reduction. In practice, the size reduction is in halves, all the way down to one pixel that represents the dominant color of the entire texture. You might think that this is a waste of memory, but it's actually more efficient than you'd think. A mip-map uses only 1/3 more memory than the original texture—each level is a quarter the size of the one above it, so the extra levels add 1/4 + 1/16 + 1/64 + ..., which converges to 1/3. Considering the vast improvement in the quality of the rendered result, you should provide mip-maps for any texture that has a relatively high frequency of information. It is especially useful for textures with regular patterns, such as our black and white line texture.

The DirectX Texture Tool can generate mip-maps for you. To do this you just load your texture and select Format, Generate Mip Maps. You can then see the resulting reduced textures by pressing PageUp and PageDn.

Gotcha

One last thing about mip-maps: As you might expect, the renderer will choose which mip-map to display based on the screen size of the polygon. This means that it's not a good idea to create huge polygons on your geometry that can recede into the distance. The renderer might not be able to make a good choice that will satisfy the look of the polygon edge both closest to the camera and the one farthest away. Most modern hardware has no problem with this, and selects the mip-map on a per-pixel basis. You can't always count on every player having modern hardware. Do some research before jumping in.

Also, while we're on the subject, many other things can go wrong with huge polygons in world space, such as lighting and collision. It's always a good idea to tessellate, or break up, large surfaces into smaller polygons that will provide the renderer with a good balance between polygon size and vertex count.

You might have heard of something called tri-linear filtering. If the renderer switches between mip-map levels on the same polygon, it's likely that you'll notice the switch. Most renderers can sample the texels from more than one mip-map and blend their colors in real time. This creates a smooth transition from one mip-map level to another, a much more realistic effect. As you approach something like a newspaper, the mip-maps are sampled in such a way that eventually the blurry image of the headline resolves into something you can read and react to.
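Enabling tri-linear filtering in DirectX 9 comes down to sampler states: linear filtering within a mip level plus linear blending between levels. A sketch, assuming a valid device pointer:

 // Linear min/mag filtering within a mip level...
 pDevice->SetSamplerState( 0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR );
 pDevice->SetSamplerState( 0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR );
 // ...plus linear blending between mip levels = tri-linear filtering.
 pDevice->SetSamplerState( 0, D3DSAMP_MIPFILTER, D3DTEXF_LINEAR );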



