So How Do Shaders Work?

Enough pictures; let’s do some coding. You are going to write the shader in FX Composer (see Chapter 6 for an introduction), using the asteroid textures you saw previously in Figure 7-8.

If you like you can try to follow the steps described here and program your own Normal Map shader like the simple shader in the last chapter, but you don’t have to. You can also just follow the shader code and play around with the complete vertex and Pixel Shader once you get through everything here. Normal Mapping is quite a cool effect, but fine-tuning it can take a lot of time, and a lot of that tweaking can be fun. Leave some tweaking options to the model artist and write several different Normal Mapping shaders so he can choose which effect to use for which material. For example, metal should look different from stone or wood materials.

Start FX Composer and open the NormalMapping.fx shader file. The layout for the shader file is similar to the SimpleShader.fx file, but you use more annotations and you will see that future shaders in this book all use a similar file structure. This way it is easier to work with shaders from the C# code.

The basic file layout is:

  • Header comment and description of the shader

  • Matrices (world, worldViewProj, viewInverse, and whatever else is required)

  • Other global variables like time, factors, or testing values

  • Material data, first the material colors, then all used textures and samplers

  • Vertex structures, most importantly the VertexInput structure, which is always the same and uses the TangentVertex format you have also defined in your engine.

Then for each technique in the shader you have a code block, usually separated by a comment line like //-----------------, consisting of:

  • Vertex Shader

  • Pixel Shader

  • Technique that puts these shaders together, often using the same vertex shaders for different shader models over and over again.

To keep things simple I am just going to explain the vertex and Pixel Shader for the default Normal Mapping technique, called Specular20, which is used for all the upcoming games in this book. There is also a technique called Specular, which works the same way on shader model 1.1, but has some features turned off or reduced because of the eight-instruction limit in Pixel Shader 1.1.

Please go through the shader file header and parameters after you open the file; they are basically in the same form as for SimpleShader.fx from the last chapter. What has changed since last chapter is the VertexInput format, which looks similar, but it expects a tangent input now. You already learned about the problems with .x and .fbx content files before and this will be solved later in this chapter. Assume for now that you have valid tangent data, which can be used in the shader. Luckily for you FX Composer always gives you valid tangent data for the standard test objects (sphere, teapot, cube, and so on).

  // Vertex input structure (used for ALL techniques here!)
  struct VertexInput
  {
    float3 pos      : POSITION;
    float2 texCoord : TEXCOORD0;
    float3 normal   : NORMAL;
    float3 tangent  : TANGENT;
  }; // VertexInput

In SimpleShader.fx you simply started coding and once it worked you did not refactor the shader code anymore. This is totally fine, but the more shaders you write, the more you will think about reusing components. One way to do that is to define the most commonly used methods directly in the shader file:

  // Common functions
  float4 TransformPosition(float3 pos)
  {
    return mul(mul(float4(pos, 1), world), viewProj);
  } // TransformPosition(.)

  float3 GetWorldPos(float3 pos)
  {
    return mul(float4(pos, 1), world).xyz;
  } // GetWorldPos(.)

  float3 GetCameraPos()
  {
    return viewInverse[3].xyz;
  } // GetCameraPos()

  float3 CalcNormalVector(float3 nor)
  {
    return normalize(mul(nor, (float3x3)world));
  } // CalcNormalVector(.)

  // Get light direction
  float3 GetLightDir()
  {
    return lightDir;
  } // GetLightDir()

  float3x3 ComputeTangentMatrix(float3 tangent, float3 normal)
  {
    // Compute the 3x3 transform from tangent space to object space
    float3x3 worldToTangentSpace;
    worldToTangentSpace[0] =
      mul(cross(normal, tangent), world);
    worldToTangentSpace[1] = mul(tangent, world);
    worldToTangentSpace[2] = mul(normal, world);
    return worldToTangentSpace;
  } // ComputeTangentMatrix(..)

Another way would be to store methods or even parameters and constants in separate .fxh files, similar to C++ header files. I don’t like that approach because FX Composer can only show one source file at a time and you will most likely still change a lot of code at this stage.

The first common function is TransformPosition, which just converts the 3D vertex position in the vertex shader to screen space. It works exactly the same way as in the last chapter except that you don’t have a combined worldViewProj matrix anymore. Instead you have a world matrix, which you had in SimpleShader.fx too, but the viewProj matrix is new. After multiplying world by viewProj you would have worldViewProj again, but there is a good reason why this multiplication is done in the shader instead of being precomputed in the C# code, even though it costs you one additional vertex shader instruction.
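The reason this split works at all is matrix associativity: transforming by world and then by viewProj gives the same clip-space position as transforming by a precombined worldViewProj. A minimal Python sketch (plain nested lists, made-up world and viewProj matrices, row-vector convention as in HLSL’s mul) demonstrates this:

```python
# (p * World) * ViewProj == p * (World * ViewProj)

def mat_mul(a, b):
    # Multiply two 4x4 matrices stored as nested lists.
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def vec_mat(v, m):
    # Transform a row vector by a 4x4 matrix (HLSL mul(vector, matrix)).
    return [sum(v[k] * m[k][j] for k in range(4)) for j in range(4)]

# Hypothetical world matrix: translate by (2, 0, 0).
world = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [2, 0, 0, 1]]
# Hypothetical viewProj stand-in: uniform scale by 0.5.
view_proj = [[0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 0.5, 0], [0, 0, 0, 1]]

pos = [1.0, 2.0, 3.0, 1.0]  # float4(pos, 1)
two_step = vec_mat(vec_mat(pos, world), view_proj)   # what the shader does
one_step = vec_mat(pos, mat_mul(world, view_proj))   # precombined worldViewProj
print(two_step == one_step)  # both give the same clip-space position
```

The matrices here are hypothetical; the point is only that the two orders of multiplication agree, which is why you are free to move the combine step into the shader.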

By separating the world matrix from the worldViewProj matrix you only have to worry about one single matrix that changes for every object, mesh, or custom 3D data you are going to render. The more data you have to pass from the program to the shader, the longer it takes; setting parameters over and over again and starting shaders too often per frame can significantly slow down your rendering code. It is much better to set up the shader once and then render thousands of objects in a batch, just changing the world matrix for each of them. You can take this idea even further by using an array of world matrices to store 20 or more matrices at once and then go through all of them. This technique is called instancing and is sometimes used by shooter games to optimize performance even more when many objects of the same kind use the same shader. In earlier versions of Rocket Commander I had instancing implemented too, but it was too much work to get it working in all shader models (1.1, 2.0, and 3.0) and to also support the fixed function pipeline. It did not matter anyway because the performance was quite good after all the other shader optimizations.

The other helper methods are pretty simple and you should understand them quickly just by looking at them, except for the last one, which is used to build the tangent space matrix I talked about. Check out Figures 7-4 and 7-5 again to see which vector is which. ComputeTangentMatrix is used in the vertex shader (like all the other helper methods) and it expects the tangent and the normal vector for each vertex point. Good thing you got these from the VertexInput structure. The binormal vector is generated by calculating the cross product of the normal and tangent vectors. You can reconstruct this matrix with your right hand (remember Chapter 5?): the middle finger is the normal vector, the forefinger is the tangent, and the thumb represents the binormal, which is the cross product of the other two and is therefore at a 90 degree angle to both of them.
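The right-hand rule described above can be checked numerically. A quick Python sketch (with made-up normal and tangent vectors) shows that the cross product yields a binormal perpendicular to both inputs:

```python
# Build the binormal from normal and tangent via a cross product and check
# that all three axes of tangent space are mutually perpendicular.

def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

normal = [0.0, 0.0, 1.0]   # hypothetical per-vertex normal
tangent = [1.0, 0.0, 0.0]  # hypothetical per-vertex tangent
binormal = cross(normal, tangent)

print(binormal)
print(dot(binormal, normal) == 0.0 and dot(binormal, tangent) == 0.0)
```

For these example vectors the binormal comes out as the remaining axis, perpendicular to both normal and tangent, exactly as the hand analogy suggests.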

This tangent space matrix is very useful to quickly transform normals and the light direction from world space to tangent space, which is required for the Pixel Shader Normal Mapping calculation. The reason you need this tangent space matrix is to make sure all vectors are in the same space. This space lies directly on top of the polygon and points up (z) like the normal vectors do, while x and y describe the tangent and binormal. Using this tangent space is the easiest and fastest way to do the lighting in the Pixel Shader; all you have to do is get the order of the binormal and tangent used to construct this tangent space matrix right. The order, and even the cross product for the binormal, might have to be reversed for a left-handed engine. Use unit testing to figure out which way is the correct one; you can also take a look at the shaders from the original Rocket Commander game (left-handed) to see the differences that were made in the XNA port (right-handed).

Vertex Shaders and Matrices

With all these helper methods it should be easy to write the vertex shader now. Before you do that you should define the VertexOutput structure first. I’m only going to explain the Pixel Shader 2.0 version here because the Pixel Shader 1.1 version is really complicated and uses a lot of assembly shader code, which is outside the scope of this book. Please pick up a good shader book to learn more about shader details and the shader assembly language if you really want to support Pixel Shader 1.1. I recommend the Programming Vertex and Pixel Shaders, GPU Gems, or ShaderX series of books. Wolfgang Engel, a hardcore shader expert, is involved in most of them and you can learn a lot of shader tricks that were developed over the last few years by many professionals. This topic is so big that it has spawned a whole new programming job: Shader Programmer.

If you are in a small team or even have to do all the programming yourself, this can be a hassle because you have to learn so much and you have very little time because the technology is advancing so fast. Try to keep it simple and only use the easiest shader model for you. It may be bad for some gamers that XNA does not support the fixed function pipeline and it also does not support Direct3D 10 shader model 4.0, but this way you can concentrate on just creating Pixel Shader 2.0 shaders (or maybe use Pixel Shader 3.0 too) and getting a game done.

Your vertex structure needs the screen space position, as always, and you have to pass the texture coordinates over to the Pixel Shader too so the diffuse and Normal Maps can be used correctly. Please note that you have to duplicate the texture coordinates in Pixel Shader 1.1 because each texture coordinate input in the Pixel Shader can only be used once. This is one of the many problems with Pixel Shader 1.1; you also can’t use normalize there or even the pow function, and it is harder to uncompress the compressed Normal Map back to a useful format. Good thing you don’t have to think about that here anymore.

  // Vertex shader output structure
  struct VertexOutput_Specular20
  {
    float4 pos      : POSITION;
    float2 texCoord : TEXCOORD0;
    float3 lightVec : TEXCOORD1;
    float3 viewVec  : TEXCOORD2;
  }; // VertexOutput_Specular20

The lightVec and viewVec variables are just helpers to make the Pixel Shader computation a little bit easier. The lighting calculation is basically the same as in the previous chapter, but this time you have to calculate all vectors in the tangent space because it is easier to work in the tangent space than to convert all tangent space vectors to the world space. That makes sense because you have a lot more pixels than vertices. Converting every pixel with a complex matrix operation in the Pixel Shader would be too slow and just converting the light direction and the view vector in the vertex shader does not cost much time.
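What the vertex shader does with lightVec can be sketched in a few lines of Python. Assuming an identity world matrix and made-up tangent-space axes (both assumptions, not values from the shader file), transforming a world-space direction into tangent space is just three dot products against the rows of the matrix:

```python
# Transforming a world-space vector into tangent space is three dot products
# against the tangent-space axes (the rows of worldToTangentSpace).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def to_tangent_space(v, binormal, tangent, normal):
    # Mirrors HLSL's mul(worldToTangentSpace, v), with the rows built the
    # same way as in ComputeTangentMatrix (identity world matrix assumed).
    return [dot(binormal, v), dot(tangent, v), dot(normal, v)]

# Hypothetical axes for a polygon lying flat in the xy plane:
tangent, binormal, normal = [1, 0, 0], [0, 1, 0], [0, 0, 1]
light_dir = [0.0, 0.6, 0.8]
ts = to_tangent_space(light_dir, binormal, tangent, normal)
print(ts)  # the 0.8 component ends up along the normal (z) axis
```

Doing this once per vertex for lightVec and viewVec is exactly the cost trade-off described above: a handful of dot products per vertex instead of a matrix transform per pixel.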

Take a look at the whole vertex shader code. The important part here is the use of the worldToTangentSpace matrix, which is calculated by the ComputeTangentMatrix method you saw before:

  // Vertex shader function
  VertexOutput_Specular20 VS_Specular20(VertexInput In)
  {
    VertexOutput_Specular20 Out = (VertexOutput_Specular20) 0;
    Out.pos = TransformPosition(In.pos);
    // We can duplicate texture coordinates for diffuse and Normal Map
    // in the Pixel Shader 2.0.
    Out.texCoord = In.texCoord;

    // Compute the 3x3 transform from tangent space to object space
    float3x3 worldToTangentSpace =
      ComputeTangentMatrix(In.tangent, In.normal);

    float3 worldEyePos = GetCameraPos();
    float3 worldVertPos = GetWorldPos(In.pos);

    // Transform light vector and pass it as a color (clamped from 0 to 1)
    // For ps_2_0 we don't need to clamp from 0 to 1
    Out.lightVec = normalize(mul(worldToTangentSpace, GetLightDir()));
    Out.viewVec = mul(worldToTangentSpace, worldEyePos - worldVertPos);

    // And pass everything to the pixel shader
    return Out;
  } // VS_Specular20(.)

Pixel Shader and Optimizations

The Pixel Shader now takes the vertex output and computes the light influence for each pixel. The first thing you have to do is to get the diffuse and Normal Map colors. The diffuse map just has the RGB values and maybe an alpha value if you use alpha blending (not really right now, but it is supported by this shader). The more complicated call is getting the normal vector from the compressed Normal Map. If you remember the way the compressed Normal Maps were built earlier with the NormalMapCompressor tool you can probably guess that you have to exchange the red and alpha channels again to make the RGB data valid and usable as an XYZ vector again. The first step to do this is to use a so-called swizzle, which takes the RGBA or XYZW data from a texture or a shader register and changes the order. For example, ABGR reverses the order of RGBA. In your case you just need the alpha channel (x) and the green (y) and blue (z) channels, so you use the .agb swizzle.

Then you use the formula described earlier to convert the floating-point color values to vector values by subtracting 0.5 and dividing by 0.5, which is the same as multiplying by 2 and subtracting 1 (dividing by 0.5 is the same as multiplying by 2). To fix any remaining compression errors you normalize the vector again, which takes one extra Pixel Shader instruction, but it is definitely worth it.
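Here is a small Python sketch of that decompression math (the texel values are made up): pick the a, g, and b channels, remap the [0, 1] color range to [-1, 1], and re-normalize:

```python
import math

# Decode a compressed Normal Map texel: take the a, g, b channels (the
# .agb swizzle), map [0, 1] colors to [-1, 1] vectors, then re-normalize.

def decode_normal(r, g, b, a):
    v = [2.0 * c - 1.0 for c in (a, g, b)]  # .agb swizzle + unpack
    length = math.sqrt(sum(c * c for c in v))
    return [c / length for c in v]

# (c - 0.5) / 0.5 is the same mapping as 2 * c - 1:
assert all(abs((c - 0.5) / 0.5 - (2 * c - 1)) < 1e-12
           for c in (0.0, 0.25, 0.7, 1.0))

# Hypothetical texel: x stored in alpha, y in green, z in blue.
texel = dict(r=0.0, g=0.5, b=1.0, a=0.5)
print(decode_normal(**texel))  # → [0.0, 0.0, 1.0], an unperturbed normal
```

The final normalize is the Python counterpart of the extra instruction mentioned above; with a lossy DXT5-style compression the decoded vector rarely has length exactly 1.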

  // Pixel Shader function
  float4 PS_Specular20(VertexOutput_Specular20 In) : COLOR
  {
    // Grab texture data
    float4 diffuseTexture = tex2D(diffuseTextureSampler, In.texCoord);
    float3 normalVector = (2.0 * tex2D(normalTextureSampler,
      In.texCoord).agb) - 1.0;
    // Normalize normal to fix blocky errors
    normalVector = normalize(normalVector);

    // Additionally normalize the vectors
    float3 lightVector = In.lightVec; // not needed: normalize(In.lightVec);
    float3 viewVector = normalize(In.viewVec);
    // For ps_2_0 we don't need to unpack the vectors to -1 - 1

    // Compute the angle to the light
    float bump = saturate(dot(normalVector, lightVector));
    // Specular factor
    float3 reflect = normalize(2 * bump * normalVector - lightVector);
    float spec = pow(saturate(dot(reflect, viewVector)), shininess);
    //return spec;

    float4 ambDiffColor = ambientColor + bump * diffuseColor;
    return diffuseTexture * ambDiffColor +
      bump * spec * specularColor * diffuseTexture.a;
  } // PS_Specular20(.)

With this normalVector (still in tangent space) you can now do all the lighting calculations. The light vector and view vector are extracted from the VertexOutput_Specular20 structure. The light vector is the same for every vertex and pixel because you just use a directional light source, but the view vector can be very different if you are close to the object you are rendering. Figure 7-11 again shows why it is important to re-normalize this view vector like in the previous chapter. For Normal Mapping it is even more important because you calculate the light for every single pixel and the vectors can vary a lot from pixel to pixel through the variation in the Normal Map.

image from book
Figure 7-11

The bump value is calculated the same way you calculated the diffuse color in the previous chapter. For every normal pointing toward the light you use a bright color, and if it points away, it gets darker. Both the normal vector and the light vector are in tangent space and this way the basic formula stays the same. You can test out simple lighting effects in the SimpleShader.fx file and then re-implement them here together with Normal Mapping, which is cool because even though this shader is a lot more complex, the basic lighting calculation is still simple and exchangeable. If you take a look at the Normal Mapping and parallax mapping shaders of the next chapters you will see that this is the only place you make modifications; the rest of the Normal Mapping shader stays the same.

The last thing you have to calculate before putting the final color together is the specular color component. You don’t have a half vector yet, but you have the view vector in tangent space and you know which way the normal points. The code uses a slightly simplified formula, subtracting the light vector from two times the normal vector scaled by the bump value to generate a pseudo normalized reflection vector. Doing the same computation as in the SimpleShader.fx vertex shader would be too expensive in a Pixel Shader, and the Normal Mapping shader looks fine with this calculation too; it does not really matter how exact this vector is. The important thing is the pow method that keeps the shiny spot small, and you should fine-tune the shininess and specularColor values for each material because they greatly influence how the final output looks (see Figure 7-12).
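The bump and specular math can be reproduced outside the shader to get a feel for it. This Python sketch uses made-up tangent-space vectors as stand-ins for the real per-pixel values:

```python
import math

# Sketch of the Pixel Shader lighting: diffuse "bump" term plus the
# simplified reflect vector (2 * bump * normal - lightVec).

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = math.sqrt(dot(v, v))
    return [c / length for c in v]

def saturate(x):
    return max(0.0, min(1.0, x))

def specular(normal, light_vec, view_vec, shininess):
    bump = saturate(dot(normal, light_vec))
    reflect = normalize([2.0 * bump * n - l
                         for n, l in zip(normal, light_vec)])
    return bump, saturate(dot(reflect, view_vec)) ** shininess

normal = [0.0, 0.0, 1.0]             # hypothetical tangent-space normal
light = normalize([0.0, 0.3, 1.0])   # hypothetical light direction
view = [0.0, 0.0, 1.0]               # looking straight down the normal
bump, spec = specular(normal, light, view, shininess=24.0)
print(bump, spec)
```

Raising the shininess exponent shrinks the highlight, which is exactly the per-material tuning knob mentioned above.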

image from book
Figure 7-12

Putting together the final color may look a little bit strange to you. The first two lines are pretty easy to understand: first you add the ambient color to the bump value multiplied by the diffuse color. This operation is actually just one instruction in the shader and that’s the reason why it is written this way (it is easier to read the assembly output and to use the same code for Pixel Shader 1.1).

Then the specular color is multiplied by the spec value, which creates the highlights, then by the bump value, and finally by the diffuse texture’s alpha value. The bump value makes sure that the specular component gets darker when the surface points away from the light, and the alpha value helps you fade out the specular highlights where the texture is transparent (an alpha blending feature, not used in Rocket Commander).
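The two steps above can be sketched component-wise in Python, mirroring the return statement of PS_Specular20 (the material values here are hypothetical):

```python
# Final color, component-wise, like the last lines of PS_Specular20:
#   diffuseTexture * (ambient + bump * diffuse)
#   + bump * spec * specularColor * diffuseTexture.a

def final_color(diffuse_tex, tex_alpha, ambient, diffuse, specular_col,
                bump, spec):
    # All colors are (r, g, b) tuples with components in [0, 1].
    return tuple(
        t * (a + bump * d) + bump * spec * s * tex_alpha
        for t, a, d, s in zip(diffuse_tex, ambient, diffuse, specular_col))

# Hypothetical material and lighting values:
color = final_color(
    diffuse_tex=(0.6, 0.5, 0.4), tex_alpha=1.0,
    ambient=(0.2, 0.2, 0.2), diffuse=(1.0, 1.0, 1.0),
    specular_col=(1.0, 1.0, 1.0), bump=0.9, spec=0.3)
print([round(c, 3) for c in color])
```

Setting tex_alpha to 0.0 kills only the specular term, which shows why the alpha multiply fades highlights out on transparent texels without darkening the rest.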

That’s everything you have to know for now to get the Normal Mapping shader to work in FX Composer. Feel free to play around with it a little bit - change color values and use different textures and parameters. Now you have to worry about getting the correct tangent data in the application and the ability to easily import model files a model artist might have created for you.

Professional XNA Game Programming: Building Games for Xbox 360 and Windows with XNA Game Studio 2.0
ISBN: 0470261285
Year: 2007