VERTEX SHADERS

Common Code for Vertex Shaders

Since all the vertex shaders require setup code that's one of a few variations on a theme, these common components are collected and discussed in more depth here.

Component 1: Vertex Transform and Copy

Description This is the simplest of the setup codes; it simply transforms the incoming vertex position by the matrix and places a copy of the transformed position into register r0 for use later in the shader. This is necessary since oPos is a write-only register.

Prerequisites Assumes that c[TRANS_MVP] points to the first register in the transposed model-view-projection matrix. Assumes that the vertex position is in register v0.

Summary When completed, this fragment will have placed the transformed vertex position (in homogenous clip-space coordinates) into oPos and r0.

 // code component 1
 // emit projected position with a copy in r0
 m4x4   r0, v0, c[TRANS_MVP]
 mov    oPos, r0

Component 2: Vertex Transform and Copy, Normal Transform

Description This is the same as component 1 but also transforms the normal of the vertex by the inverse transpose model-view-projection matrix.

Prerequisites Assumes that c[TRANS_MVP] points to the first register in the transposed model-view-projection matrix, and that c[TRANS_INV_MVP] contains the transpose of the inverse of the model-view-projection matrix. This, of course, can change depending upon whether your transformations use only uniform scaling, in which case you can use the same matrix as c[TRANS_MVP] for the normal transformation. See the section titled "Transforming Normal Vectors" in Chapter 4 for more information. Also assumes that the vertex position is in register v0 and that the vertex normal is in v3.

Summary When completed, this fragment will have placed the transformed vertex position (in homogenous clip-space coordinates) into oPos and r0, and a transformed and normalized vertex normal value in r1. Note the use of the dp3-rsq-mul sequence. You'll frequently see this when it's necessary to normalize a vector.

 // code component 2
 // emit projected position with a copy in r0
 m4x4   r0, v0, c[TRANS_MVP]
 mov    oPos, r0
 // calculate a unit vertex normal
 // transform the normal
 m3x3   r1, v3, c[TRANS_INV_MVP]
 // calculate the squared length into r1.w
 dp3    r1.w, r1, r1
 // calculate 1/sqrt(length) into r1.w
 rsq    r1.w, r1.w
 // normalize it
 mul    r1, r1, r1.w   // r1 = |n|
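As an aside, the arithmetic behind the dp3-rsq-mul normalize sequence is easy to check outside the shader. Here's an illustrative Python sketch (the function name is mine; this is not shader code):

```python
import math

def normalize_rsq(v):
    # dp3: squared length of the vector
    sq_len = sum(c * c for c in v)
    # rsq: reciprocal square root of that value
    inv_len = 1.0 / math.sqrt(sq_len)
    # mul: scale each component by 1/length
    return [c * inv_len for c in v]

n = normalize_rsq([3.0, 0.0, 4.0])   # a 3-4-5 triangle, length 5
```

After the sequence runs, the vector has unit length, which is exactly what the lighting dot products downstream require.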

Component 3: Vertex Transform and Copy, Normal Transform, Local View (Light) Vector

Description This is the same as component 2 but also takes the viewer position (or eye position) and creates a view direction normalized vector.

Prerequisites The same as component 2, with the assumption that the eye position is input in register c[EYE_POS] in clip-space coordinates since generally you'll have a unique eye per frame (i.e., you do the transform outside the shader).

Note This is the code for a "local viewer"; that is, the view direction is calculated per vertex. This code is exactly the same for a "local light"; the only difference is that you'd place the light position in clip coordinates instead of the eye position.

Summary When completed, this fragment will have placed the transformed vertex position (in homogenous clip-space coordinates) into oPos and r0, a transformed and normalized vertex normal value in r1, and the normalized view vector in r2.

 // code component 3
 // emit projected position with a copy in r0
 m4x4   r0, v0, c[TRANS_MVP]
 mov    oPos, r0
 // calculate a unit vertex normal
 // transform the normal
 m3x3   r1, v3, c[TRANS_INV_MVP]
 // calculate the squared length into r1.w
 dp3    r1.w, r1, r1
 // calculate 1/sqrt(length) into r1.w
 rsq    r1.w, r1.w
 // normalize it
 mul    r1, r1, r1.w   // r1 = |n|
 // create view direction vector in r2
 // (If this was a light position we
 // could use it to create a light direction)
 add    r2, r0, -c[EYE_POS]
 // calculate the squared length into r2.w
 dp3    r2.w, r2, r2
 // calculate 1/sqrt(length) into r2.w
 rsq    r2.w, r2.w
 // normalize it
 mul    r2, r2, r2.w   // r2 = |v|

Constant Color Shading Shaders

This is the simplest vertex shader, the one you'd use to place a constant color on an object. The color is set before the object is rendered.

Uses Use this to generate a solid color object with no shading effects.

Description If you want an object to have the same color irrespective of lighting conditions or viewpoint, this shader will do it.

Prerequisites Object color is passed in as a constant, v0 is the vertex position.

Results The color is written to oD0 so that it can be used in the fixed function pipeline. The result of the shader is shown in Figure 7.1.

 // Constant Color Shading.vsh
 vs.1.0   // Shader version 1.0
 // emit transformed position
 m4x4  oPos, v0, c[TRANS_MVP]
 // stick the color into the output diffuse register
 mov   oD0, c[COLOR]

Figure 7.1: Constant vertex color gives color to an object but no hint of depth.

Vertex Color Shading

This is the shader that you'd use to place a color on an object that can vary on a per vertex basis. The vertex color is passed in along with the vertex position.

Uses Use this to generate a multicolor object with no shading effects.

Description If you want an object to have a per vertex color that's irrespective of lighting conditions or viewpoint, this shader will do it.

Prerequisites Object color is passed as v4, v0 is vertex position.

Results The color is written to oD0 so that it can be used in the fixed function pipeline. The color might be interpolated or not depending upon the shading render state. The result of the shader is shown in Figure 7.2.

 // Vertex Color Shader.vsh
 vs.1.0    // Shader version 1.0
 // emit transformed position
 m4x4  oPos, v0, c[TRANS_MVP]
 // stick the color into the output diffuse register
 mov   oD0, v4

Figure 7.2: Assigning a per vertex color gives more control at the expense of more stream data.

Ambient Shading

When you want to add ambient light to a scene, the ambient light is modulated by the material color of the vertex to create the final color.

Uses Use this to generate the effect of ambient illumination on an object.

Description Ambient light shader provides a constant illumination on an object irrespective of the viewer or the object's orientation. The ambient object color is modulated with the ambient light color.

Prerequisites Object color and light color are passed in as constants, v0 is vertex position.

Results The color is written to oD0 so that it can be used in the fixed function pipeline. The result of the shader is shown in Figure 7.3.

 // Ambient Shading.vsh
 vs.1.0    // Shader version 1.0
 // emit transformed position
 m4x4  oPos, v0, c[TRANS_MVP]
 // multiply the ambient light color by the
 // ambient color of the object
 mov  r0, c[AMBIENT_LIGHT_COLOR]
 mul  oD0, c[AMBIENT_COLOR], r0
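The mul at the end is a per-channel multiply of the two colors. A quick illustrative sketch in Python (names are mine, not part of the shader):

```python
def ambient(material_rgb, light_rgb):
    # mul oD0, c[AMBIENT_COLOR], r0: multiply channel by channel
    return [m * l for m, l in zip(material_rgb, light_rgb)]

# an orange-ish material under dim gray ambient light
color = ambient([1.0, 0.5, 0.25], [0.2, 0.2, 0.2])
```

Note that a zero ambient light color zeroes out the object entirely, which is why unilluminated areas in Figure 7.3 are black.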

Figure 7.3: Ambient shading gives color plus shading, but the unilluminated areas are black.

Lambertian Diffuse Lighting with an Infinite Light

This shader uses the same math that the traditional fixed function pipeline uses to generate the diffuse lighting term for an infinite light. The diffuse light color and diffuse material color are specified. The diffuse colors are frequently the same as the ambient colors.

Uses Use this to generate the diffuse term for an infinitely distant light shining on a vertex. When combined with a Blinn-Phong specular term, you'll get the same results as the fixed function pipeline does.

Description This is the traditional expression for diffuse lighting. Since the light source is infinite, any objects using this shader with that light source will all be lit from the same side. Diffuse lighting doesn't depend upon the viewing angle, only upon the angle between the surface normal and the light direction. This shader will compute the light direction vector, normalize it, then compute the dot product of the light vector and the normal to get the intensity. The diffuse colors are then modulated by each other and the intensity.

Prerequisites All the assumptions for use of component 2. Material diffuse color is passed in as a constant. Light diffuse color is passed in as a constant. The light direction is passed in as a constant in clip-space coordinates—it should be normalized. v0 is vertex position. v3 is the normal.

Results The color is written to oD0 so that it can be used in the fixed function pipeline. The result of the shader is shown in Figure 7.4.

 // Diffuse Shader with infinite light
 vs.1.1
 // transform the vertex
 m4x4 oPos, v0, c[TRANS_MVP]
 // store light vector (assume it's normalized)
 mov r0, c[LIGHT_VECTOR]   // r0 = |l|
 // compute |n| dot |l|
 dp3 r5.x, r0, v3          // r5.x = n dot l
 // no need to clamp in next step
 // - it's done automatically
 //max r5.x, r5.x, c12.x   // max(n dot l, 0)
 // compute diffuse
 mov r10, c[MATERIAL_DIFFUSE]
 mul r11, r10, c[LIGHT_DIFFUSE]
 // r11 = diffuse material color times
 // diffuse light color
 // modulate by |n| dot |l|
 mul oD0, r11, r5.xxxx
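The Lambertian term boils down to one dot product and a clamp. Here's an illustrative Python sketch of the same math (function and argument names are mine):

```python
def diffuse(normal, light_dir, mat_rgb, light_rgb):
    # dp3 plus clamp: intensity is max(n . l, 0)
    n_dot_l = max(sum(a * b for a, b in zip(normal, light_dir)), 0.0)
    # modulate material color, light color, and intensity
    return [m * l * n_dot_l for m, l in zip(mat_rgb, light_rgb)]
```

A surface facing the light gets the full modulated color; a surface facing away clamps to black rather than going negative.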

Figure 7.4: Infinite lights are the most common type of lights. The light vector for all vertices is the same.

Lambertian Diffuse Lighting with a Positional Light

This shader uses the same math that the traditional fixed function pipeline uses to generate the diffuse lighting term for a positional (or local) light. The diffuse light color and diffuse material color are specified.

Uses Use this to generate the diffuse term for a local light shining on a vertex. When combined with a Blinn-Phong specular term, you'll get the same results as the fixed function pipeline does.

Description The only difference between this shader and the previous one is that this one has a positional light. Diffuse lighting doesn't depend upon the viewing angle, only the angle between the surface normal and the light direction. In this case, we've got our light position in world coordinates—the same as our vertex position and normal. Thus we can skip transforming them since we can create a light directional vector in world coordinates, and dot that with our normal, which is also in world coordinates. The angle between the normal and light direction will be the same as long as they are in the same coordinate system when we take the dot product.

Prerequisites All the assumptions for use of component 2. The material diffuse color is passed in as a constant. The light diffuse color is passed in as a constant. The light position is passed in as a constant in world coordinates. v0 is vertex position. v3 is the normal.

Results The diffuse color is written to oD0 so that it can be used in the fixed function pipeline. The result of the shader is shown in Figure 7.5.

 // Lambertian Diffuse Shader with local light
 vs.1.1
 // transform the vertex
 m4x4 oPos, v0, c[TRANS_MVP]
 // compute the light vector
 // first light pos - vertex pos
 sub r0, c[LIGHT_POS], v0
 // then normalize it
 dp3 r0.w, r0, r0
 rsq r0.w, r0.w
 mul r0, r0, r0.w          // r0 = |l|
 // compute |n| dot |l|
 dp3 r5.x, r0, v3          // r5.x = n dot l
 // no need to clamp in next step
 // - it's done automatically
 // compute diffuse
 mov r10, c[MATERIAL_DIFFUSE]
 mul r11, r10, c[LIGHT_DIFFUSE]
 // r11 = diffuse material color times
 // diffuse light color
 // modulate by |n| dot |l|
 mul oD0, r11, r5.xxxx
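The only new work over the infinite-light version is building a per-vertex light direction. An illustrative Python sketch of the sub/dp3/rsq/mul sequence (names are mine):

```python
import math

def light_vector(light_pos, vertex_pos):
    # sub: light position minus vertex position
    d = [l - p for l, p in zip(light_pos, vertex_pos)]
    # dp3/rsq/mul: normalize the difference vector
    inv_len = 1.0 / math.sqrt(sum(c * c for c in d))
    return [c * inv_len for c in d]
```

Both positions must be in the same coordinate space (world space here) for the resulting direction to be meaningful.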

Figure 7.5: A positional light means calculating a unique light direction vector for each vertex. For light sources close to a surface, this overhead is sometimes necessary to get the light looking correct.

Blinn-Phong Specular Lighting

This shader uses the same math that the traditional fixed function pipeline uses to generate the specular lighting term. It uses the built-in lit function to calculate the lighting term.

Uses Use this to generate the specular term for a light shining on a vertex. When combined with a Lambertian diffuse term, you'll get the same results as the fixed function pipeline does.

Description This is the implementation of Blinn's simplification to Phong's lighting equation. It requires computation of the half-angle vector. The specular term generated is

(n · h)^ms

where ms is the power value, usually referred to as the "shininess" parameter. This value can be in the range [0, 128]. And h is the normalized half-angle vector, computed from the light and view directions as h = (l + v)/|l + v|.

Prerequisites All the assumptions for use of component 2. The material specular color is passed in a constant. The light specular color is passed in a constant. The light position is passed in a constant, with the specular power value tagging along in the w component of the constant.

Results The specular color is written to oD1 so that it can be used in the fixed function pipeline. Use of the specular register is controlled by a render state that's turned off by default. Figure 7.6 shows the specular-only term, and then the specular term plus the ambient and diffuse terms from the previous shaders.

 // Blinn-Phong Specular
 vs.1.1
 // transform the vertex
 m4x4 oPos, v0, c[TRANS_MVP]
 // compute the light vector
 // 1st light pos - vertex pos
 sub r0, c[LIGHT_POS], v0
 // then normalize it
 dp3 r0.w, r0, r0
 rsq r0.w, r0.w
 mul r0, r0, r0.w      // r0 = |l|
 // compute the view vector
 sub r1, c[EYE_POS], v0
 dp3 r1.w, r1, r1
 rsq r1.w, r1.w
 mul r1, r1, r1.w      // r1 = |v|
 // compute half vector
 add r3, r1, r0
 mul r3, r3, c12.yyyy  // r3 = |h|
 // compute n dot l
 dp3 r5.x, r0, v3      // r5.x = n dot l
 max r5.x, r5.x, c12.x // max(n dot l, 0)
 // compute n dot v
 dp3 r5.z, r1, v3      // r5.z = n dot v
 max r5.z, r5.z, c12.x // max(n dot v, 0)
 // compute n dot h
 dp3 r5.y, r3, v3      // r5.y = n dot h
 max r5.y, r5.y, c12.x // max(n dot h, 0)
 // the lit instruction expects:
 // r5.x = n dot l
 // r5.y = n dot h
 // r5.z = ignored
 // r5.w = shininess value
 mov r5.w, c13.x       // move shininess in
 lit r6.z, r5          // result is in r6.z
 // compute specular
 // multiply light color * material specular
 mov r10, c[MATERIAL_SPECULAR]
 mul r9, r10, c[LIGHT_SPECULAR]
 // attenuate by specular intensity
 mul oD1, r9, r6.zzzz
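The half-vector math and the lit instruction's behavior can be sketched outside the shader. An illustrative Python version (names are mine; lit also kills the specular term when the light is behind the surface, which this mimics):

```python
import math

def blinn_phong_specular(n, l, v, shininess):
    # half vector: normalize(l + v)
    h = [a + b for a, b in zip(l, v)]
    inv_len = 1.0 / math.sqrt(sum(c * c for c in h))
    h = [c * inv_len for c in h]
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    n_dot_l = max(dot(n, l), 0.0)
    n_dot_h = max(dot(n, h), 0.0)
    # lit-style rule: no specular when the light is behind the surface
    return n_dot_h ** shininess if n_dot_l > 0.0 else 0.0
```

Raising n · h to the shininess power is what narrows the highlight: the larger the exponent, the tighter the bright spot.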

Figure 7.6: Blinn-Phong specular on the left and then the ambient, diffuse, and specular terms combined on the right.

Decal Texture Shader

A decal texture is a texture that's not affected by the color values of a vertex, but the texture itself is used to calculate the color of the vertex through use of the vertex's texture coordinates. Any color values from the vertex shader are ignored. For this example, we need to assume that when the shader is run there is a single texture active and that there is no computation using the vertex's color. This means that we're using a rendering setting of D3DTOP_SELECTARG1 for DirectX. Thus our first decal shader will simply transform the position vector (as all vertex shaders must do) and then pass along the texture coordinates from the vertex stream.

Uses Use this to apply a texture as a decal (i.e., no shading).

Description This shader will result in rendering an object where the intensity of the color is unaffected by any lighting conditions. You'd use this kind of effect when you were rendering textured objects in an environment where the lighting was constant and uniform, or when you always want the object to be seen, such as with a HUD or a user interface.

Prerequisites Texture 0 is valid and loaded. v0 is the vertex position and v7 is the texture coordinate. The texture used for the bowling pin is shown in Figure 7.7.

Figure 7.7: The decal we're going to wrap the object with.

Results No color is computed for this vertex; the texture coordinates are passed to the FFP or pixel shader, and the texture is sampled. The result of the shader is shown in Figure 7.8.

 // Decal texture.vsh
 vs.1.0   // Shader version 1.0
 m4x4   oPos, v0, c4   // emit projected position
 // it's assumed that the texture blending is set
 // to D3DTOP_SELECTARG1 so we just decal the texture
 // and ignore any color calculations
 mov    oT0.xy, v7     // copy texcoords

Figure 7.8: Decal texturing slaps a texture onto a surface with no other coloring effects.

Color Modulated Decal Shader

The advantage of the shader listed earlier is that we don't need to perform any lighting calculations on it—it simply shows up on top of the object. Unfortunately, most interesting environments aren't like that. We want our textures to blend into an object that's affected by the environment. The simplest effect on an object is taking into account the object's color—that is, the color of an object is blended or modulated (i.e., multiplied) by the texture.

Uses Use this to generate a multicolor object blended with a texture.

Description This shader performs a blend between a texture and a vertex color. In order for the graphics pipeline to know what we want it to do, we need to use the D3DTOP_MODULATE for DirectX. The only real difference in this shader is that we're calculating a color and sticking it in the output color register. If you don't tell the pipeline to use the color register, then calculating a color value will have no effect on the output (except to slow down the shader). The FFP will observe any texture blending states you may set. If a pixel shader were used, it would have to be written to do the blending.

We have a texture that is multiplied with our color, and this might or might not be what you want. Most of the time, if this is your first attempt to blend colors with textures, you'll be disappointed because the texture appears much darker than you'd expect. This is an effect of the color math, since all of our colors are floating point values in the range [0, 1]. If you take a texel intensity of 0.5 and a vertex intensity of 0.5 and multiply them, you get a color intensity of 0.25, which is typically not what you want.

There are various ways to get around this. If you're applying a texture over an entire object, you can increase the brightness of the base object color. If you're applying a texture to a small area, then you can make the texture overbright. DirectX has some blending operations that will double (D3DTOP_MODULATE2X) or quadruple (D3DTOP_MODULATE4X) the resulting colors while still clamping them to a maximum of 1.0. You can also fix this in the pixel shader.
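The darkening and the 2x/4x fixes are just arithmetic on [0, 1] values. An illustrative Python sketch of the modulate family (names are mine; the real work happens in the texture blending stage):

```python
def modulate(texel, vertex, scale=1.0):
    # D3DTOP_MODULATE family: multiply per channel, scale,
    # then clamp the result to a maximum of 1.0
    return [min(t * v * scale, 1.0) for t, v in zip(texel, vertex)]
```

With scale=1.0 two mid-gray inputs come out at 0.25; with scale=2.0 (the MODULATE2X behavior) the same inputs come back at the original 0.5 intensity.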

Prerequisites Object color is passed in register v4. Texture 0 is valid and loaded. v0 is the vertex position, and v7 contains the texture coordinates.

Results The color is written to oD0 so that it can be used in the fixed function pipeline. The result of the shader is shown in Figure 7.9.

 // Color Modulated Decal Shader.vsh
 vs.1.0    // Shader version 1.0
 // emit transformed position
 m4x4   oPos, v0, c[TRANS_MVP]
 // stick the color into the output diffuse register
 mov  oD0, v4
 mov  oT0.xy, v7   // copy texcoords
 // How these get blended will depend upon
 // render states and/or pixel shader code

Figure 7.9: To blend a texture with a surface color, you'll need to modulate the texture color.

Fog Shader

The oFog.x value is the fog factor to be interpolated and then routed to the fog table. Only the scalar x component of the fog register is used. You cannot use vertex shader fog with a pixel shader unless you perform the calculations yourself. The oFog register bypasses the pixel shader code and is used after the pixel shader when fog calculations are processed by the pipeline.

Uses Use this to set per vertex fog values that will be processed by the pipeline after the pixel shader has run.

Description When applying vertex fog, fog calculations are applied at each vertex in a polygon, and then interpolated across the face of the polygon during rasterization. The fog value is clamped to the [0, 1] range when the shader exits.

In order to use fog, it must be enabled. In DirectX, this is a call to set the render state with the D3DRS_FOGENABLE parameter. Vertex fog must be used when setting the fog intensity using the vertex shader. You also should set the fog parameters and fog color as well.

 // Enable fog mode blending and color.
 m_pd3dDevice->SetRenderState(D3DRS_FOGENABLE, TRUE);
 // select linear fog and parameters
 m_pd3dDevice->SetRenderState(D3DRS_FOGVERTEXMODE,
          D3DFOG_LINEAR);
 float FogStart = 0.2f, FogEnd = 0.9f;
 m_pd3dDevice->SetRenderState(
          D3DRS_FOGSTART, *(DWORD*)&FogStart);
 m_pd3dDevice->SetRenderState(
          D3DRS_FOGEND, *(DWORD*)&FogEnd);
 // Set the fog color to red.
 m_pd3dDevice->SetRenderState(D3DRS_FOGCOLOR,
          D3DCOLOR_XRGB(255, 0, 0));
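For reference, the linear fog mode selected above blends according to a simple ramp between the start and end distances. An illustrative Python sketch (function name is mine):

```python
def linear_fog_factor(depth, fog_start, fog_end):
    # D3DFOG_LINEAR convention: 1.0 = unfogged, 0.0 = fully fogged
    f = (fog_end - depth) / (fog_end - fog_start)
    return max(0.0, min(1.0, f))
```

With the FogStart/FogEnd values above, anything at depth 0.2 or nearer is untouched and anything at 0.9 or beyond is solid fog color.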

Prerequisites The fog render states and parameters must be set.

Results You can see the object rendered at its depth in the fog in Figure 7.10.

 // Fog Shader.vsh
 // This fog shader places the vertex into the fog depth
 // based upon the vertical height of the vertex. The
 // closer the vertex to y=0, the thicker the fog. The
 // object sticks out of the fog.
 vs.1.1
 // transform the vertex
 m4x4 oPos, v0, c[TRANS_MVP]
 // compute the light vector
 // 1st light pos - vertex pos
 sub r0, c[LIGHT_POS], v0
 // then normalize it
 dp3 r0.w, r0, r0
 rsq r0.w, r0.w
 mul r0, r0, r0.w      // r0 = |l|
 // compute n dot l
 dp3 r5.x, r0, v3      // r5.x = n dot l
 max r5.x, r5.x, c12.x // max(n dot l, 0)
 // output n dot l as the vertex color
 // this gives a gray color scheme
 mov oD0, r5.x
 // now compute z value (since RenderMonkey orients
 // with z = vertical)
 // shift vertices up by c6.z
 mov r0.z, v0.z
 add r0.z, r0.z, c6.z
 // c6.x is fog max position
 // compute fog max minus shifted vertex position
 // divided by fog max [0, 1] range
 // (max - vertex.z)/max
 rcp r3.x, c6.x
 mad r3.x, r0.z, -r3.x, c5.z  // c5.z holds 1.0
 // oFog.x is the fog value
 mov oFog.x, r3.x

Figure 7.10: Fog is done after the texture stage. In vertex shaders, you can just set the vertex fog intensity. Here, the fog is red and the pin is standing out of the fog.

Point Size Shader

Point primitives may sound fairly uninteresting, but they show their worth by the fact that they require very little in the way of generation or rendering. This allows you to generate a great deal of point primitives and thus add a great deal of "detail" to a scene. For this reason, point primitives are usually used in particle systems. Since the "point" is always oriented toward the viewpoint, you can always be sure that your point will be seen, and you can thus judge how dense to make the particles to get the effect you want.

Uses Use this shader to generate small point sprites. Since the maximum value of the point is fairly small, this limits using the point size as a replacement for a billboard.

Description Unlike the other registers, the point size register oPts can be written only in the x channel (since it's really just a single scalar register). Also unlike the other registers, there's no clamping to a [-1, 1] or [0, 1] range. The range maximum depends upon the current device, but the range will be [0, MaxPointSize]. Any values that are less than 1 that do not cover a pixel center will not get rendered.

Point sprites ignore any texture coordinate you might specify and allow you to use a complete texture to cover a point. Our first point size vertex shader will show how the point size can vary. We'll let the point size vary between 0 and 64. In your program, you'd probably want to vary the point size so that it'd be dependent upon the distance from the viewpoint. In this shader, we're going to generate the maximum value for our points by using some of the commands in the shader language. Since 64 is a power of 2, we can use part of the lit instruction to do this for us. We can take our constant register value of 2 (held in the c0.w channel) and use the mad instruction to generate the value 6. We can then pass the 2 and 6 constants to lit and let it generate 2^6, or 64, for us. We can then multiply by our fractional clock time to vary this value over the [0, 64] range and store the result into oPts.x.
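The constant-building trick described above reduces to a mad, a power, and a scale. An illustrative Python sketch (names are mine; in the shader the power comes from the lit instruction):

```python
def point_size(clock_frac):
    # mad: 2*2 + 2 = 6
    exponent = 2.0 * 2.0 + 2.0
    # lit-style exponentiation: 2^6 = 64
    max_size = 2.0 ** exponent
    # scale by the [0, 1] clock fraction, as with oPts.x
    return clock_frac * max_size
```

At a clock fraction of 1.0 you get the full 64-pixel size; at 0.5 you get 32, and so on down to zero.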

Prerequisites The diffuse color and inverse matrix are stored in constants. There's a constant that's set per frame, with the y element being the fractional seconds in the range [0, 1], plus a series of constants set in a register, with the y component being equal to 2.

Results You'll get a point sprite that ranges from 0 to 64 pixels across with a frequency of one second. The results of the shader are shown in Figure 7.11.

 // Point Size Shader.vsh
 vs.1.1
 // This shader hooks RenderMonkey's time-varying cos
 // value to c5 and multiplies that abs(value)
 // by MaxPointSize. This value is typically 64 pixels
 // clamped by the hardware. I set the color to a row of
 // the transformation matrix.
 m4x4 oPos, v0, c[TRANS_MVP]
 // semi-random output color
 mov oD0, c[TRANS_MVP+1]  // output color
 max r0, c5, -c5          // abs(cos)
 // only the x component can be set
 // c4.x is MaxPointSize
 mul oPts.x, c4.x, r0.x

Figure 7.11: Point shaders are somewhat limited since point size has only a fixed maximum value.

Multitexture Shader

A multitexture shader is just a shader that applies two or more textures. Depending upon the blending, you can decide exactly how the two textures interact.

Uses Use this to apply a texture as a decal, with other textures modifying the initial texture, either as a dirt or scuff map, a specular map, or as a bump map.

Description This shader shows how to use multiple textures. The vertex shader sets up the usual things, but we've allowed it to hold a random value in the [0, 1] range that can be used to morph the texture coordinates of one of the textures. We reuse the original texture coordinates for the scuff map and make it randomly oriented around the y axis by moving the u coordinate. The offset is set outside the shader.

Prerequisites Texture 0 is valid and loaded. v0 is the vertex position and v7 holds the texture coordinates. The texture used for the decal is shown in Figure 7.12.

Figure 7.12: The bowling pin texture map.

With that texture used as the decal texture, we're going to apply a scuff texture. We're going to need texture wrapping enabled to allow for texture coordinates to go outside the [0,1] range. The texture we're going to use as the scuff map is shown in Figure 7.13.

Figure 7.13: The scuff map.

Results The results are shown in Figure 7.14. You can see that in addition to our decal texture, we've added a scuff map to give it a beat-up appearance.

 // multitexture.vsh
 vs.1.0     // Shader version 1.0
 m4x4  oPos, v0, c4   // emit projected position
 // copy decal texcoords
 mov   oT0.xy, v7     // copy texcoords
 // we want the scuff map to be randomly
 // oriented around pin, but only in the u direction
 // so we modify the u coordinate
 mov  r0, v7
 add  r0.x, r0.x, c[OFFSET].x
 mov  oT1.xy, r0      // copy texcoords
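The shader just adds the offset and relies on wrap addressing to bring the coordinate back into range. An illustrative Python sketch of what the sampler effectively does (the function name is mine):

```python
def scuffed_u(u, offset):
    # with wrap addressing, a coordinate outside [0, 1) repeats,
    # which is the same as keeping only the fractional part
    return (u + offset) % 1.0
```

This is why wrapping has to be enabled for texture stage 1: without it, offsets that push u past 1.0 would clamp at the texture edge instead of rotating the scuff map around the pin.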

Figure 7.14: The pin with the texture map and the scuff map added on top of it.

If you don't need any blending modes other than those provided by the FFP, then there's no need to use a pixel shader. One advantage of using a pixel shader is that the shader determines how the textures get blended so you generally don't have to worry about render states for the blending modes.

 // multitexture.psh
 ps.1.1   // Shader version 1.1
 // sample the textures
 tex t0           // decal map
 tex t1           // scuff map
 add r0, t0, t1   // add the scuff colors

Silhouette Edge Shader

This shader is a simple one for rendering silhouette edges. It's designed to be rendered on your object as a separate pass since all it does is render the silhouette edges. The actual shading of everything else needs to be done as another pass over the vertices. It requires alpha blending to be enabled.

Uses Use this to generate a silhouette edge for well-tessellated objects since this shader depends upon detecting when a vertex's normal is perpendicular to the view vector.

Description This shader assumes that when a vertex's normal is perpendicular (or nearly so) to the view direction, it's part of a triangle that's on the edge of the object; hence you need a reasonably smooth object for this approach to work. Curved objects work better than objects with flat edges. The best objects contain curves that all have a similar degree of curvature, since quickly curving surfaces provide fewer edge pixels.

The dot product of the view vector and the normal vector is calculated for each vertex. It's then rescaled from the [-1, 1] range to the [0, 1] range and placed in the diffuse color's alpha channel. The degree of thickness is controlled by the alpha reference value set using the D3DRS_ALPHAREF state. The alpha value (range [0, 1]) is then compared to the alpha reference value (range [0x00, 0xFF]) to determine if the pixel should be rendered. Since this is done on the interpolated values in the alpha-blending stage, this is a per pixel test.
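The remap and the alpha comparison are both one-liners. An illustrative Python sketch (names are mine; the comparison in hardware is the D3DCMP_GREATEREQUAL test against D3DRS_ALPHAREF):

```python
def silhouette_alpha(v_dot_n):
    # mad oD0.w, r2.w, 0.5, 0.5: remap [-1, 1] into [0, 1]
    return v_dot_n * 0.5 + 0.5

def edge_pixel(v_dot_n, alpha_ref):
    # alpha test: interpolated alpha (scaled to [0, 255])
    # compared against the [0x00, 0xFF] reference value
    return silhouette_alpha(v_dot_n) * 255.0 >= alpha_ref
```

Near-silhouette vertices, where the dot product is close to zero, land near 0.5 alpha, so a reference value around 0x80 controls how wide a band of "edge" survives the test.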

Prerequisites All the assumptions for use of component 2. A constant is set up that contains the opaque (outline) color and some required range conversion values. The ModelView matrix is passed in along with the MVP matrix.

Since you'll render the internal shading in another pass, the vertex position for this pass must be computed in exactly the same manner as any other passes or else z fighting may occur. You need to enable alpha testing, specify the comparison function, and set the silhouette edge threshold (dwSilhouetteAmount) as in this code.

 m_pd3dDevice->SetRenderState(D3DRS_ALPHATESTENABLE,
                  TRUE);
 m_pd3dDevice->SetRenderState(D3DRS_ALPHAFUNC,
                  D3DCMP_GREATEREQUAL);
 m_pd3dDevice->SetRenderState(D3DRS_ALPHAREF,
                  dwSilhouetteAmount);

Results The diffuse color is written whenever the alpha threshold is crossed. Rendering all edges of the object, the output of the silhouette shader is shown in Figure 7.15.

Figure 7.15: A simple silhouette shader.

 // Silhouette Edge Shader
 vs.1.0 // Shader version 1.0
 // emit projected position
 // It's assumed that this is how you calculated
 // the position for any other pass of the object
 m4x4 oPos, v0, c[TRANS_MVP]
 // Now ModelView matrix
 // calculate normalized position vector
 m4x4   r0, v0, c[TRANS_MV]
 dp3    r0.w, r0, r0
 rsq    r0.w, r0.w
 mul    r0, r0, r0.w   // r0 = |p|
 // and normalized normal vector
 m3x3   r1, v3, c[TRANS_MV]
 dp3    r1.w, r1, r1
 rsq    r1.w, r1.w
 mul    r1, r1, r1.w   // r1 = |n|
 // dot the view direction with the normal
 dp3    r2.w, r0, r1
 // convert it from the [-1,1] range to [0,1] range
 // and place it in the output alpha
 // c[CONST].x == 0.5
 mad oD0.w, r2.w, c[CONST].x, c[CONST].x
 // place silhouette color in the rgb output
 mov oD0.xyz, c[SILHOUETTE_COLOR]



Real-Time Shader Programming (The Morgan Kaufmann Series in Computer Graphics)
ISBN: 1558608532
Year: 2005
Pages: 104
Authors: Ron Fosner
