Up to this point, the rendering process has been concerned with the shape of the objects that will be rendered. Now it is time to discuss how the shapes will look from the color aspect. A number of factors determine the color of each pixel as it is rendered to the screen. These factors include the material applied to the triangle, any texture applied to the triangle, the current light shining on the object, and any current fog effects.
Objects may have four basic color properties: diffuse, ambient, specular, and emissive.
Diffuse color: The basic color for a particular portion of the object
Ambient color: The minimum level of brightness of the color
Specular color: The shininess of the color (i.e., how reflective the color is)
Emissive color: The color that the object emits at a particular point
This color information is held in the vertices of the object. If all three vertices of a triangle have the same color value, then that triangle will have that same color. If the color values at the vertices differ, then the color of the triangle will transition from one color to the next as we look across the triangle from one vertex to another.
Another factor that determines the color at a given pixel in a triangle is the type of shading that is being performed. There are two shading methods that we can select from in the standard pipeline: flat and Gouraud. The flat shading method is the quickest and uses the color value at the first vertex for the entire polygon. The drawback with this method is that the outline of each polygon tends to be quite visible. Unless you desire this look to achieve a particular effect, I recommend against using flat shading.
The preferred method is Gouraud shading. This method uses the color data at each vertex as well as the vertex normal and lighting data to determine the colors to use. It also interpolates the colors between the vertices. The end effect is that the object looks smoother and more natural. Since the shading is being done in the video card anyway, this is the way to go.
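The interpolation behind Gouraud shading can be sketched with a few lines of plain C++ (this is illustrative code, not part of the DirectX API): the color at any point inside a triangle is a weighted average of the three vertex colors, where the barycentric weights sum to 1.

```cpp
// Sketch of Gouraud-style color interpolation across a triangle.
// The weights w0, w1, w2 are barycentric coordinates (w0 + w1 + w2 == 1).
struct Color { float r, g, b; };

Color gouraudInterpolate(Color c0, Color c1, Color c2,
                         float w0, float w1, float w2)
{
    return { c0.r * w0 + c1.r * w1 + c2.r * w2,
             c0.g * w0 + c1.g * w1 + c2.g * w2,
             c0.b * w0 + c1.b * w1 + c2.b * w2 };
}
```

At a vertex, one weight is 1 and the others are 0, so the vertex color is reproduced exactly; toward the center of the triangle the three colors blend smoothly, which is why Gouraud-shaded objects lack the faceted look of flat shading.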
Vertex coloring, or material coloring, as it is sometimes referred to, is one way to color our objects. Another method, which provides a much higher apparent level of detail, is texturing. Texturing is the process by which a portion of a bitmap is applied to each triangle. The image on the bitmap supplies the added detail. There are also alternate texturing options that apply predetermined lighting and bump information to the polygon. This allows a flat surface to appear to have shadows and an irregular surface. Depending on the capabilities of your video card, it is possible to combine up to eight textures on a polygon in a single pass. Each texture may also have multiple mipmaps. Mipmaps are a set of versions of a given texture at different resolutions. Each mipmap level is half the width and height of its parent texture. If mipmapping is being used, the smaller textures are automatically used as the textured polygons move farther from the eye point.
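The halving rule for mipmaps determines how many levels a full mipmap chain contains. A quick sketch in plain C++ (the helper name is our own, not a Direct3D call):

```cpp
// Count the levels in a full mipmap chain: each level halves the width
// and height of its parent (never going below 1) until we reach 1x1.
int countMipLevels(int width, int height)
{
    int levels = 1;                          // level 0: the full-size texture
    while (width > 1 || height > 1) {
        width  = (width  > 1) ? width  / 2 : 1;
        height = (height > 1) ? height / 2 : 1;
        ++levels;
    }
    return levels;
}
```

A 256×256 texture, for example, carries nine levels: 256, 128, 64, 32, 16, 8, 4, 2, and 1.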
This multitexturing is accomplished using texture stages. A texture is assigned to each stage through the SetTexture method, which takes the zero-based stage index along with the texture. A texture blending operation is also set for each stage using the SetTextureStageState method. The blending operation specifies how each texture is combined with the result of the previous stage to produce the value that will be passed to the next stage or to the renderer. The operation applied to the first stage typically specifies how the first texture is combined with the material color of the polygon. Table 3-2 lists the various texture stage operations and a brief description of what they do, and Table 3-3 lists the states that may be set for each stage.
OPERATION | DESCRIPTION |
---|---|
ADD | Adds the components of the two arguments |
ADDSIGNED | Adds the components of the two arguments and subtracts 0.5 |
ADDSIGNED2X | Functions the same as ADDSIGNED, but shifts the result left 1 bit |
ADDSMOOTH | Performs the operation (argument 1 + argument 2) - (argument 1 × argument 2) |
BLENDCURRENTALPHA | Blends arguments using the alpha from the previous stage |
BLENDDIFFUSEALPHA | Blends the arguments using alpha data from vertices |
BLENDFACTORALPHA | Blends arguments using a scalar alpha |
BLENDTEXTUREALPHA | Blends arguments using alpha data from this stage texture |
BLENDTEXTUREALPHAPM | Blends arguments using a premultiplied alpha |
BUMPENVMAP | Performs per-pixel bump mapping without luminance |
BUMPENVMAPLUMINANCE | Performs per-pixel bump mapping with luminance |
DISABLE | Disables output from the texture stage |
DOTPRODUCT3 | Computes the dot product of the arguments and replicates the result to all color channels, including alpha |
LERP | Performs linear interpolation between two arguments proportioned by a third argument |
MODULATE | Multiplies the components of the two arguments |
MODULATE2X | Multiplies the two arguments and shifts left 1 bit |
MODULATE4X | Multiplies the two arguments and shifts left 2 bits |
MODULATEALPHA_ADDCOLOR | Modulates the color of the second argument with the alpha from the first argument |
MODULATECOLOR_ADDALPHA | Modulates color of the arguments and adds alpha from the first argument |
MODULATEINVALPHA_ADDCOLOR | Functions the same as MODULATEALPHA_ADDCOLOR, but uses inverse of alpha |
MODULATEINVCOLOR_ADDALPHA | Functions the same as MODULATECOLOR_ADDALPHA, but uses inverse of color |
MULTIPLYADD | Multiplies the second and third argument and adds in the first argument |
PREMODULATE | Modulates this texture stage with the next stage |
SELECTARG1 | Uses the first color or alpha argument as the output |
SELECTARG2 | Uses the second color or alpha argument as output |
SUBTRACT | Subtracts the components of argument 2 from argument 1 |
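Many of the operations in Table 3-2 are simple per-channel arithmetic on color values in the 0.0 to 1.0 range, with the result clamped back into that range. The following plain C++ sketch (not Direct3D API code; the function names are our own) shows the math for a few of them on a single channel:

```cpp
// Per-channel arithmetic behind several Table 3-2 blending operations.
// Color channels are floats in [0, 1]; results are clamped to that range.
float clampChannel(float v) { return v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v); }

float opModulate (float a1, float a2) { return clampChannel(a1 * a2); }         // MODULATE
float opAdd      (float a1, float a2) { return clampChannel(a1 + a2); }         // ADD
float opAddSigned(float a1, float a2) { return clampChannel(a1 + a2 - 0.5f); }  // ADDSIGNED
float opAddSmooth(float a1, float a2) { return clampChannel(a1 + a2 - a1 * a2); } // ADDSMOOTH
float opSubtract (float a1, float a2) { return clampChannel(a1 - a2); }         // SUBTRACT
```

Note how ADDSMOOTH can never overflow: since a1 + a2 - a1·a2 equals a1 + a2·(1 - a1), the result stays in range for in-range inputs, whereas plain ADD must be clamped.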
STATE | DESCRIPTION |
---|---|
ADDRESSU | Specifies the addressing mode for the U coordinate (wrap, clamp, etc.) |
ADDRESSV | Specifies the addressing mode for the V coordinate (wrap, clamp, etc.) |
ADDRESSW | Selects the texture addressing method for the W coordinate |
ALPHAARG0 | Selects the setting for the third alpha argument for triadic operations |
ALPHAARG1 | Designates the first argument for the alpha operation |
ALPHAARG2 | Designates the second argument for the alpha operation |
ALPHAOP | Specifies the texture-alpha blending operation for this stage |
BORDERCOLOR | Specifies the color to use for out-of-range texture coordinates |
BUMPENVLOFFSET | Indicates offset value for bump map luminance |
BUMPENVLSCALE | Indicates scale value for bump map luminance |
BUMPENVMAT00 | Specifies the [0] [0] coefficient for the bump mapping matrix |
BUMPENVMAT01 | Specifies the [0] [1] coefficient for the bump mapping matrix |
BUMPENVMAT10 | Specifies the [1] [0] coefficient for the bump mapping matrix |
BUMPENVMAT11 | Specifies the [1] [1] coefficient for the bump mapping matrix |
COLORARG0 | Selects the setting for the third color argument for triadic operations |
COLORARG1 | Designates the source for the first argument for the operation |
COLORARG2 | Designates the source for the second argument for the operation |
COLOROP | Specifies the texture-color blending operation (values for which appear in Table 3-2) |
MAGFILTER | Specifies the type of filtering to use for texture magnification |
MAXANISOTROPY | Specifies the maximum level of anisotropy |
MAXMIPLEVEL | Specifies the highest level of mipmap that will be used |
MINFILTER | Specifies the type of filtering to use for texture minification |
MIPFILTER | Specifies the type of filtering to use between mipmap levels |
MIPMAPLODBIAS | Specifies a bias factor in selecting mipmap levels |
RESULTARG | Selects the destination for the stage's operation (default is the next stage) |
TEXCOORDINDEX | Specifies which texture coordinates out of a possible eight sets to use |
TEXTURETRANSFORMFLAGS | Controls the transformation of texture coordinates for this stage |
The texture coordinates stored at each of the polygon's vertices determine the portion of the resulting texture that is applied to the polygon. The texture coordinates are stored in the U and V variables of the vertex. Think of U as the texture X coordinate and V as the texture Y coordinate. The texture coordinates are floating-point numbers in the range of 0.0 to 1.0. With this range, it does not matter if the texture is 16×16 pixels or 512×512 pixels. The portion of the image specified by the coordinates will be applied to the polygon. There are also filtering options that can be set to specify how the images are compressed or expanded to fit the size of the polygon. It is usually safe to let DirectX determine which filtering method to use. If the default method is causing problems, we will then need to determine what other method (if any) provides a better result.
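The size independence of normalized U,V coordinates, and the wrap and clamp addressing modes controlled by the ADDRESSU/ADDRESSV states, can be sketched in plain C++ (illustrative helpers, not Direct3D calls):

```cpp
#include <algorithm>
#include <cmath>

// WRAP repeats the texture: only the fractional part of the coordinate counts.
float addressWrap(float u)  { return u - std::floor(u); }

// CLAMP pins out-of-range coordinates to the texture's edge.
float addressClamp(float u) { return std::min(std::max(u, 0.0f), 1.0f); }

// Convert a normalized coordinate into a texel index for a texture
// 'size' texels wide; the same u lands proportionally in any size.
int toTexel(float u, int size)
{
    int t = static_cast<int>(u * size);
    return std::min(t, size - 1);   // keep u == 1.0 on the last texel
}
```

With wrapping, U = 1.25 samples the same place as U = 0.25; with clamping, it sticks to the right edge. And U = 0.5 lands at texel 8 of a 16-wide texture or texel 256 of a 512-wide one, which is exactly why the 0.0 to 1.0 range works for any texture size.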
Note | Notice in the preceding paragraph that the texture sizes are a power of two. This is important! Video cards tend to expect that textures will always be a power of two in size. Although some texture operations appear to accept odd-sized textures, it is not recommended that you use odd-sized textures. Internally the operations are resizing the textures to be a power of two. Make it a practice to create your textures in power-of-two sizes, and you will stay out of a lot of trouble. |
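One easy way to follow the power-of-two advice is to validate texture dimensions before creating the texture. A hypothetical helper (not part of DirectX) using a standard bit trick:

```cpp
// A power of two has exactly one bit set, so n & (n - 1) clears it to zero.
bool isPowerOfTwo(int n)
{
    return n > 0 && (n & (n - 1)) == 0;
}
```

Checking both the width and the height this way at load time catches odd-sized art assets before the driver silently resizes them.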
Although we have applied our texture to the polygon, we still do not have the final colors that will be rendered to the screen. Now we need to consider lighting effects. The ambient light color and intensity, as well as any other lights specified in the scene, are combined with the pixel colors based on the angle between the surface normal and the direction of each light.
Three types of lights can be defined in our scene: point lights, directional lights, and spotlights:
Point lights radiate in all directions from a specified location (like the light emitted from a lightbulb).
Directional lights behave as if they are at an infinite distance in a given direction (like sunlight).
Spotlights produce a cone of light similar to a car's headlights.
For all of these lights, we are able to define diffuse, specular, and ambient colors just like we did for the vertices. The color of each light shining on a surface is added to the color value of the surface to calculate the resulting color.
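The core of the diffuse part of this calculation is Lambertian shading: a light's contribution scales with the cosine of the angle between the surface normal and the direction toward the light. A minimal plain C++ sketch (not Direct3D code; both vectors are assumed to be unit length):

```cpp
#include <algorithm>

struct Vec3 { float x, y, z; };

// Dot product of two vectors; for unit vectors this is the cosine
// of the angle between them.
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Diffuse intensity: full strength when the surface faces the light,
// zero when it faces away (negative cosines are clamped to zero).
float diffuseFactor(Vec3 normal, Vec3 toLight)
{
    return std::max(0.0f, dot(normal, toLight));
}
```

This factor then scales the light's diffuse color before it is added to the surface color, which is why surfaces turned away from a light receive none of its diffuse contribution.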
It is possible that we still do not have the final color for our pixels. If we have defined fog for the scene, it makes a final adjustment to the color values. Fog works by fading the color of objects from their otherwise calculated values to the fog color as a function of the distance from the viewpoint. The exact rate of transition to the fog color is based on the minimum fog distance, the maximum fog distance, and the falloff formula for the fog. All of these attributes are defined when we set up fog for our scene. You will learn the details of lighting and fog in Chapter 7.
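The simplest falloff formula, linear fog, can be sketched in plain C++ (illustrative only; Chapter 7 covers the real setup). Following Direct3D's convention, a fog factor of 1 means no fog and 0 means the pixel is fully the fog color:

```cpp
#include <algorithm>

// Linear fog factor: 1.0 at or before fogStart, 0.0 at or beyond fogEnd,
// fading linearly in between. The final pixel color is then
// factor * litColor + (1 - factor) * fogColor.
float linearFogFactor(float distance, float fogStart, float fogEnd)
{
    float f = (fogEnd - distance) / (fogEnd - fogStart);
    return std::min(1.0f, std::max(0.0f, f));
}
```

So with fog running from 10 to 20 units, an object at distance 15 is blended halfway toward the fog color, and anything past 20 units disappears into the fog entirely.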