Determining Color by Blending Textures

In the real world, few applications use simple pseudo-randomly generated colors when rendering models. Models are normally created with highly detailed textures, and sometimes quite a few of them. What if you needed to blend two or more textures for your model? Sure, you could do that with the fixed-function pipeline, but what if the model was something like "tiny.x", which most of the code in this book uses? This model only uses a single texture, so how can you render it with two textures blended together?

UNDERSTANDING THE PIXEL SHADER INSTRUCTION SET LIMIT

Since you will be manipulating a pixel shader in this code, one thing you may notice is that the number of available "instructions" you can use in a 1.1 pixel shader code segment is extremely limited. For pixel shaders before version 1.4, you are limited to 12 instructions for the entire shader program, which obviously isn't very much. For version 1.4, the limit is raised to 28 (in two phases), but that's still not much. You are also limited to 8 constants in these shader programs. Pixel shader versions 2.0 and higher can perform much more complex operations and have much higher instruction and constant limits. You can check the capabilities of your card to find out how many of each are supported.
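For example, you can query the supported shader versions through the Managed DirectX caps structures. The following is only a quick sketch (the adapter ordinal and the use of Manager.GetDeviceCaps here are illustrative, not part of this chapter's sample):

 // Query the default adapter's capabilities for the hardware device type.
 // PixelShaderVersion and VertexShaderVersion are System.Version values,
 // so they compare naturally against values such as new Version(2, 0).
 Caps caps = Manager.GetDeviceCaps(0, DeviceType.Hardware);
 Console.WriteLine("Pixel shader version: {0}", caps.PixelShaderVersion);
 Console.WriteLine("Vertex shader version: {0}", caps.VertexShaderVersion);

 if (caps.PixelShaderVersion < new Version(1, 1))
 {
     // No programmable pixel shading in hardware; fall back to the
     // fixed-function pipeline or a reference device
 }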

Once again, for brevity, you should use the code example written in the previous chapter that rendered the textured mesh. This example will focus on adding a second texture and writing the code to blend between the two textures to calculate a final color.

Knowing this limitation, the code for the texture blending here will be shown in two separate ways: one with an extremely simple pixel shader that can be used on cards that support version 1.1, and another more complex version that can be used on cards that support version 2.0 and above. You will need a relatively new graphics card to support the latter feature set.

Naturally, you should start with the 1.1 version since it's obviously simpler. Before you get to the actual shader code, though, you will need to make a few updates to the application code. For one thing, you need a second texture to do the blending. Add the following texture variable for your second texture into your class:

 private Texture skyTexture; 

Also, since you know that the model you will be loading only has one material and one texture, you may want to remove the need for using arrays of materials and textures. The code included on the CD has done this, and the remaining code for this example will assume it has been completed.
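If you make that simplification, the class-level declarations might look something like the following sketch (the exact names are assumptions carried over from the earlier samples):

 // A single material and a single texture for the one-subset model,
 // replacing the earlier Material[] and Texture[] arrays, plus the new
 // sky texture used for blending
 private Material meshMaterial;
 private Texture meshTexture;
 private Texture skyTexture;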

You will now need to create a second texture that will be used to blend into your model. This example will assume that you've copied the skybox_top.JPG texture that ships with the DirectX SDK into your application folder. Feel free to use a different texture that suits your mood if you want. Once you have decided on the second texture, you will need to create it. The perfect place would be where the original texture is created in the LoadMesh method. Replace the single texture creation line with this:

 // We have a texture, try to load it
 meshTexture = TextureLoader.FromFile(device, @"..\..\" +
     mtrl[i].TextureFilename);
 skyTexture = TextureLoader.FromFile(device, @"..\..\skybox_top.JPG");

If you've chosen a different texture file, you will want to use that filename instead. With the second texture created, the code to do the blending between the two can now be written. Add the following variables to your shader code:

 float Time;
 Texture meshTexture;
 Texture skyTexture;

 sampler TextureSampler = sampler_state
 {
     texture = <meshTexture>;
     mipfilter = LINEAR;
 };

 sampler SkyTextureSampler = sampler_state
 {
     texture = <skyTexture>;
     mipfilter = LINEAR;
 };

The Time variable has been used often enough that it doesn't need explaining. The two texture variables are new, though. Rather than calling SetTexture on the device to assign these textures to a stage, the shader code will maintain the textures and sample them appropriately. Feel free to remove your SetTexture calls from your application, since they will be redundant anyway.

The two sampler variables are used internally in the shader to determine how a texture is sampled. In this case, each sampler is bound to one of the two textures and uses a linear mipmap filter. You could also include items such as minification and magnification filters, along with various texture addressing modes (such as clamping), in these sampler states. For this case, the mipmap filter is enough.

Before you can use the two texture variables, you'll need to actually set them to something. In your main code's initialization routine, add the following after your mesh has been loaded:

 effect.SetValue("meshTexture", meshTexture); effect.SetValue("skyTexture", skyTexture); 

Your textures should have been created when your mesh was loaded, so they should be valid at this point. However, even though the model has been loaded and two textures have now been created, one basic problem remains: the model itself only has a single set of texture coordinates. Use the following vertex shader program in place of the one already in use:

 // Transform our coordinates into world space
 void TransformV1_1(
     in float4 inputPosition : POSITION,
     in float2 inputTexCoord : TEXCOORD0,
     out float4 outputPosition : POSITION,
     out float2 outputTexCoord : TEXCOORD0,
     out float2 outputSecondTexCoord : TEXCOORD1
     )
 {
     // Transform our position
     outputPosition = mul(inputPosition, WorldViewProj);

     // Set our texture coordinates
     outputTexCoord = inputTexCoord;
     outputSecondTexCoord = inputTexCoord;
 }

As you see here, the shader code accepts as inputs a position and a single set of texture coordinates. However, as outputs, it not only returns the transformed position, it also passes on the input texture coordinate to both sets of output texture coordinates. Essentially, the code has duplicated the texture coordinates for each vertex. Now you can replace the pixel shader program with the following:

 void TextureColorV1_1(
     in float4 P : POSITION,
     in float2 textureCoords : TEXCOORD0,
     in float2 textureCoords2 : TEXCOORD1,
     out float4 diffuseColor : COLOR0)
 {
     // Get the texture color
     float4 diffuseColor1 = tex2D(TextureSampler, textureCoords);
     float4 diffuseColor2 = tex2D(SkyTextureSampler, textureCoords2);
     diffuseColor = lerp(diffuseColor1, diffuseColor2, Time);
 };

Here the shader takes as inputs the position and both sets of texture coordinates that were returned from the vertex shader, and outputs the color that should be rendered for this pixel. In this case, you sample each of the two loaded textures (with identical texture coordinates), and then perform a linear interpolation on them (the lerp intrinsic function). The value of the Time variable will determine how much of each texture is actually visible at a time. You've never actually set the Time variable anywhere in your application, though. In your mesh drawing method, you can do that like this:

 effect.SetValue("Time", (float)Math.Abs(Math.Sin(angle))); 

You may be wondering something like "Why do the math here; wouldn't the shader be a better place to do it?" You certainly want as much of your math as possible to be performed by the graphics card rather than the CPU, since the graphics card will be much better at it. In this case, however, the pixel shader instruction count limits what you can do: if the shader code did this math for you, the shader could no longer run on cards that only support pixel shader 1.1.
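To make the division of labor concrete, here is a minimal sketch of the per-frame update; the UpdateBlendFactor name and the angle field are assumptions, so adapt them to however your drawing code is organized:

 private float angle = 0.0f;

 private void UpdateBlendFactor()
 {
     // Advance the animation each frame; the increment is arbitrary
     angle += 0.01f;

     // The CPU computes the blend factor so the 1.1 pixel shader stays
     // within its instruction limit; |sin(angle)| is always between 0 and 1,
     // which is the range lerp expects
     effect.SetValue("Time", (float)Math.Abs(Math.Sin(angle)));
 }

Call something like this once per frame before the mesh is drawn, and the blend will animate continuously.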

Before testing your texture blending, one more thing needs to be done. The function names for the vertex and pixel programs have changed, so the technique is no longer valid. Replace the technique with the following:

 technique TransformTexture
 {
     pass P0
     {
         // shaders
         VertexShader = compile vs_1_1 TransformV1_1();
         PixelShader  = compile ps_1_1 TextureColorV1_1();
     }
 }

The technique name itself didn't change, so you should be able to run the application now. You should notice that the model is textured normally at startup, but quickly blends into a sky texture (assuming you used the texture this example has), and then blends back to the model texture. It repeats this until the application is closed.

What if your card supports pixel shader 2.0, though? The code here should be updated to support both the "older" pixel shader 1.1 hardware and the new and improved pixel shader 2.0 hardware. First, add a variable to your main code so that you'll know if your card supports pixel shader 2.0:

 private bool canDo2_0Shaders = false; 

The example code will assume that the card does not support pixel shader 2.0 initially. After your device has been created in your initialization method, you should check to see if pixel shader 2.0 can be used. Make sure you do this before you've created your Effect object:

 canDo2_0Shaders = device.DeviceCaps.PixelShaderVersion >= new Version(2, 0); 
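Putting the pieces in order, the relevant part of the initialization might look something like this sketch (the device-creation line and variable names follow the earlier examples and are assumptions, not new requirements):

 // Create the device first; its caps describe what the hardware supports
 device = new Device(0, DeviceType.Hardware, this,
     CreateFlags.SoftwareVertexProcessing, presentParams);

 // The caps check must happen before the technique is selected
 canDo2_0Shaders = device.DeviceCaps.PixelShaderVersion >= new Version(2, 0);

 // ... load the mesh and textures, create the Effect object, and then
 // select the technique based on canDo2_0Shaders ...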

Since you'll be adding a second technique to handle the second set of shaders, you will need to determine which technique to use based on the highest shader model you can support. Update the line that selects the technique with the following:

 effect.Technique = canDo2_0Shaders ? "TransformTexture2_0" : "TransformTexture"; 
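If you want to be extra careful, you can also ask Direct3D to confirm that the chosen technique will really run on the current device. The following is only a hedged sketch; it assumes the Managed DirectX Effect.ValidateTechnique wrapper and is not part of the chapter's sample:

 // Ask D3DX to validate the selected technique against the current device;
 // if validation fails, fall back to the 1.1 technique
 try
 {
     effect.ValidateTechnique(effect.Technique);
 }
 catch (DirectXException)
 {
     effect.Technique = "TransformTexture";
 }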

In this case, if 2.0 shader models are supported, the application will use that technique; otherwise, it will use the technique you've already written. Before you actually write this new technique, though, there is one more update you should make. Replace the line that sets the Time variable with the following:

 if (canDo2_0Shaders)
 {
     effect.SetValue("Time", angle);
 }
 else
 {
     effect.SetValue("Time", (float)Math.Abs(Math.Sin(angle)));
 }

Notice that with the more advanced shader program, the math is moved to the graphics card. Since the instruction count is higher and can support the operations needed, this makes perfect sense. Now you'll need to add the actual new shader code, so add the code found in Listing 12.1 to your HLSL code.

Listing 12.1 Shader Model 2.0 Texture Blend
 // Transform our coordinates into world space
 void TransformV2_0(
     in float4 inputPosition : POSITION,
     in float2 inputTexCoord : TEXCOORD0,
     out float4 outputPosition : POSITION,
     out float2 outputTexCoord : TEXCOORD0
     )
 {
     // Transform our position
     outputPosition = mul(inputPosition, WorldViewProj);

     // Set our texture coordinates
     outputTexCoord = inputTexCoord;
 }

 void TextureColorV2_0(
     in float4 P : POSITION,
     in float2 textureCoords : TEXCOORD0,
     out float4 diffuseColor : COLOR0)
 {
     // Get the texture color
     float4 diffuseColor1 = tex2D(TextureSampler, textureCoords);
     float4 diffuseColor2 = tex2D(SkyTextureSampler, textureCoords);
     diffuseColor = lerp(diffuseColor1, diffuseColor2, abs(sin(Time)));
 };

 technique TransformTexture2_0
 {
     pass P0
     {
         // shaders
         VertexShader = compile vs_1_1 TransformV2_0();
         PixelShader  = compile ps_2_0 TextureColorV2_0();
     }
 }

The first thing you should notice is that the vertex shader has gotten much simpler. Instead of duplicating the texture coordinates, the code simply transforms the position and passes on the original set of coordinates. There is absolutely nothing fancy about the vertex program.

The pixel shader looks much simpler as well. It no longer expects two sets of texture coordinates, instead using the same set of texture coordinates to sample the two different textures. In older pixel shader models (anything before 2.0), the shader was only allowed to "read" a set of texture coordinates one time. Sampling two textures with the same set of coordinates would require two reads and thus wasn't allowed, so this type of operation wasn't possible with the previous shader. You should also notice that the same formula is used to determine the interpolation level regardless of whether the math is done in the application or in the shader code. The only differences in the technique are the function names (obviously) and the fact that the pixel shader is compiled with the target of ps_2_0.

You shouldn't be able to tell the difference in the application regardless of which shader is actually running. Many applications today need to support the latest and greatest features of the graphics card to look visually stunning. However, until the minimum requirements rise quite a bit, there will need to be fallback cases like this one that approximate the high-end visuals as best they can.


