6.2. Lighting and Shadows with Texture

OpenGL's lighting features, described in Chapter 4, "Lighting," provide lighting and shading effects suitable for many applications. OpenGL lighting isn't a complete solution for all applications, however. This section describes some of the many ways you can use texture mapping to improve the appearance of lit geometry.

Perhaps the most noticeable limitation of OpenGL lighting is the absence of shadows. Most graphics programmers are accustomed to this behavior, because it's the 3D graphics industry-standard lighting model. Shadows provide important visual cues, however, and add to scene realism. The sections "Static Lighting," "Light Maps," and "Depth Maps" later in this chapter all discuss how to use texture mapping to add shadows to your scene.

If your application renders a surface with both a specular highlight and a texture map, the texture map will mute the specular highlight. To learn how to compensate for this artifact of the rendering pipeline, see the section "Specular Highlights" later in this chapter.

OpenGL lighting requires high-resolution geometry to produce acceptable specular highlights, which is unacceptable for applications that are geometry limited. The section "Environment Maps" later in this chapter describes how to use cube map textures to produce accurate specular highlights on low-resolution geometry.

6.2.1. Static Lighting

One way to add shadows to your scene is to encode them in a texture image. You can use any offline process or tool to create the texture image and apply the image as a texture map at runtime. In terms of complexity and performance, application code for applying static lighting textures is identical to any texture mapping code.

The TextureMapping example code demonstrates applying a texture with a precomputed shadow. The example loads a file containing elevation and image data, and renders that data as a section of texture mapped terrain. Figure 6-5 and Plate 3 show output of the example.

Figure 6-5. A screenshot from the TextureMapping example program, which renders elevation data and imagery for a section of the Teton mountain range in Wyoming. Source: SimAuthor.


The output of the example code exhibits lighting and shading effects, but these are actually part of the texture image itself. In this case, the texture image shadow effects were generated with an offline tool based on the elevation data. Figure 6-6 and Plate 4 show the texture image used in the TextureMapping example.

Figure 6-6. The texture image used by the TextureMapping example program. Note that the image itself already contains lighting and shading effects. Source: SimAuthor.


Because the texture image already contains lighting effects, the example doesn't use OpenGL lighting as described in Chapter 4, "Lighting." It also uses GL_REPLACE for the texture environment mode, because the texture color replaces, rather than modulates, the existing primary color.

The main example code in TextureMapping.cpp is self-explanatory. It instantiates a Terrain object, described in Terrain.h and implemented in Terrain.cpp. The Terrain class has an ogld::Texture member variable to manage the texture map.

To render the elevation data, Terrain uses a HeightField class. Derived from ogld::Plane, HeightField simply replaces the all-zero z values generated by the base class with values derived from the elevation data. The Terrain class draw() method renders the image by enabling texture mapping with a call to glEnable( GL_TEXTURE_2D ), applying the texture, and drawing the HeightField.
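The z replacement that HeightField performs can be sketched in plain C++. The structure and function names below are illustrative stand-ins, not the book's actual ogld API:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch of HeightField's core idea: generate a regular grid
// of vertices, but take each vertex's z from an elevation array instead
// of the base class's all-zero values.
struct Vertex { float x, y, z; };

std::vector<Vertex> buildHeightField( const std::vector<float>& elevation,
                                      std::size_t cols, std::size_t rows,
                                      float spacing, float zScale )
{
    std::vector<Vertex> verts;
    verts.reserve( cols * rows );
    for( std::size_t r = 0; r < rows; ++r )
        for( std::size_t c = 0; c < cols; ++c )
            verts.push_back( Vertex{
                static_cast<float>( c ) * spacing,
                static_cast<float>( r ) * spacing,
                elevation[ r * cols + c ] * zScale } );
    return verts;
}
```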

6.2.2. Light Maps

Modifying an existing texture image with lighting, shading, and shadow effects is suitable only for static scenes in which the light source doesn't move. To handle dynamic scenes, your application needs to keep shadows in a separate texture image and use multitexturing to apply it to textured geometry. This allows your application to update the light map without affecting the color values in underlying textures.

The ProjectedShadows demo, available from this book's Web site, demonstrates one way to do this. The example renders shadows from a cylinder, sphere, and torus onto an underlying plane. It renders the shadows and lighting effects as they would appear on the plane and then uses the rendered image of the shadows as a texture on the plane in the final image. The example uses multitexturing to apply the GL_LUMINANCE light-map texture onto a plane textured with another texture image. The example updates the light map when the user changes the light position by rendering the lighting and shadow effects into the back buffer and then copying the contents to the light-map texture object.

Using a simple linear-algebra technique (Savchenko 2005), the ProjectedShadows example renders an orthographic view of shadows projected onto a plane. After reading that image out of the framebuffer, it renders the scene from the camera viewpoint, applying the previously rendered image as a luminance texture on a plane. Figure 6-7 and Plate 5 show the final result.

Figure 6-7. Screen shot from the ProjectedShadows example program.


Figure 6-8 shows just the light map, which contains both lighting and shadow information. To create this image, the ProjectedShadows example sets an orthographic projection that shows the plane from a top-down view. Next, it sets a model transformation to project the geometry onto that plane. It renders a high-resolution plane to capture as much lighting information as possible. Finally, with depth test and the light source disabled, it renders the sphere, cylinder, and torus, which are projected onto the plane.
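A minimal numeric sketch of the projection step, not the example's actual Savchenko implementation: this simplified version flattens a point onto the ground plane along a directional light vector (names are illustrative).

```cpp
#include <cassert>

// Project a point onto the z = 0 plane along a directional light vector,
// the kind of flattening applied to the cylinder, sphere, and torus before
// the light map is captured. Assumes lightDir.z is nonzero.
struct Vec3 { float x, y, z; };

Vec3 projectOntoGround( const Vec3& p, const Vec3& lightDir )
{
    float t = p.z / lightDir.z;   // steps back along the light direction
    return Vec3{ p.x - t * lightDir.x, p.y - t * lightDir.y, 0.f };
}
```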

Figure 6-8. The light map with projected shadows from the ProjectedShadows example program.


After rendering, the light map is in the back (undisplayed) buffer. Without swapping buffers, the ProjectedShadows example copies the light map out of the framebuffer and into a texture object using the glCopyTexImage2D() command.


void glCopyTexImage2D( GLenum target, GLint level, GLenum internalformat,
                       GLint x, GLint y, GLsizei width, GLsizei height,
                       GLint border );


Copies a region of the framebuffer into a texture object. Like glTexImage2D(), glCopyTexImage2D() defines a texture, but obtains the texture image data from the framebuffer.

target must be GL_TEXTURE_2D or one of the six cube map faces (see "Environment Maps" later in this chapter). level, internalformat, and border are the same as in glTexImage2D(). x, y, width, and height define the region of the framebuffer to copy.

Just as in glTexImage2D(), width and height must be a power of 2 unless the OpenGL version is 2.0 or later.
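On pre-2.0 implementations, an application can clamp the copy region to power-of-two dimensions before calling glCopyTexImage2D(). A small helper sketch (hypothetical helper names):

```cpp
#include <cassert>

// True if v is a positive power of two.
bool isPowerOfTwo( int v ) { return v > 0 && ( v & ( v - 1 ) ) == 0; }

// Largest power of two that is <= v (v must be >= 1); useful for sizing
// a framebuffer copy region on pre-2.0 OpenGL implementations.
int floorPowerOfTwo( int v )
{
    int p = 1;
    while( p * 2 <= v )
        p *= 2;
    return p;
}
```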

The ProjectedShadows example specifies an internalformat of GL_LUMINANCE, which tells OpenGL to copy and store a single luminance value out of the framebuffer.

When the final image is rendered, the plane is the only geometry that requires special treatment. It uses multitexturing to display both a texture image of petroglyphs using texture unit GL_TEXTURE0 and the light map using texture unit GL_TEXTURE1. Because the light map already contains sufficient light information, the plane rendered in the final image is low resolution: only four vertices. This technique improves performance for geometry-limited applications.

Because the ProjectedShadows example creates shadows only for the ogld::Plane primitive, it doesn't render shadows from the cylinder, sphere, and torus cast onto one another. OpenGL, however, provides a better technique: depth maps, described in the next section.

6.2.3. Depth Maps

The depth-map algorithm (Williams 1978), also known as shadow mapping, is a general-purpose shadow solution that elegantly supports self-shadowing. The algorithm is well suited for nonphotorealistic shadows. In its classic form, however, the depth-map algorithm suffers from aliasing artifacts and, therefore, isn't suitable for photorealistic rendering. Pixar developed the percentage-closer filtering algorithm (Reeves 1987) to eliminate these artifacts and successfully used the depth-map algorithm in films such as Luxo Jr.

Depth maps require multiple rendering passes. To use the algorithm, first render the scene as viewed from the light source and copy the resulting depth buffer into a texture object. To prepare for the second rendering pass, configure OpenGL to generate texture coordinates in the same coordinate space as the depth map, and also set depth compare parameters to compare generated texture-coordinate r values against stored depth-map values. When you render the scene a second time from the viewer's position, a generated r value greater than the corresponding stored depth value indicates that the corresponding fragment is in shadow.
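Stripped of the OpenGL machinery, the core of the algorithm reduces to a per-texel minimum-depth store plus a comparison. This is a plain C++ sketch with illustrative names, not the GL implementation:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// The depth map holds, per texel, the nearest depth rendered from the
// light's viewpoint. A fragment whose light-space depth r is greater than
// the stored value is occluded by something closer to the light: shadow.
struct DepthMap
{
    std::vector<float> depth;

    explicit DepthMap( std::size_t texels ) : depth( texels, 1.f ) {}

    // First pass: record geometry rendered from the light's viewpoint.
    void record( std::size_t texel, float d )
    {
        depth[ texel ] = std::min( depth[ texel ], d );
    }

    // Second pass: shadow test against the generated r value.
    bool inShadow( std::size_t texel, float r ) const
    {
        return r > depth[ texel ];
    }
};
```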

As an example, see the DepthMapShadows program available on this book's Web site. Figure 6-9 and Plate 6 show the final rendered image.

Figure 6-9. A screen shot from the DepthMapShadows example program.


The DepthMapShadows example updates the depth-map texture only if the light source's position has changed since the last frame. To create the depth map, the program clears the depth buffer; the color buffer isn't required to produce a depth map and, therefore, is irrelevant. The code creates a view by using the light position as the viewpoint and then renders the scene. It renders only geometry that actually casts shadows; rendering the plane isn't necessary, because the scene doesn't contain any geometry shadowed by the plane. Also, because the algorithm requires only depth values, the code disables lighting and texture mapping. Finally, the code copies the rendered depth map into the texture object with a call to glCopyTexImage2D():

 glCopyTexImage2D( GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, w, h, 0 ); 


The depth map is essentially a normalized coordinate-space representation of the scene as viewed from the light, where x and y values range horizontally and vertically over the image and look up depth values stored in the depth buffer. Figure 6-10 shows the depth map displayed as a grayscale image.

Figure 6-10. The depth map from the DepthMapShadows example program shows light and dark values corresponding to greater and lesser distances from the light source.


After creating the depth map, the DepthMapShadows program configures OpenGL to generate texture coordinates in the same coordinate space as the depth map. Generated s, t, and r values must be in the range 0.0 to 1.0. Visualize this as texture coordinates in normalized device-coordinate space as viewed from the light in the range -1.0 to 1.0 for all axes. Then scale that by 0.5 in all axes, and translate it by 0.5 in all axes. The result is texture coordinates in the range 0.0 to 1.0.

Because the depth map is a perspective image, you also need to generate a fourth texture coordinate, q. OpenGL divides s, t, and r values by q, so applications use q to perform perspective division for cases such as depth mapping.
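The coordinate math above can be illustrated without OpenGL. This sketch applies the 0.5 scale-and-bias described in the previous paragraph, plus the division by q that the pipeline performs for projective lookups:

```cpp
#include <cassert>

// Map a normalized-device-coordinate value in [-1, 1] into [0, 1]:
// the scale-by-0.5, translate-by-0.5 step described above.
float scaleBias( float ndc ) { return ndc * 0.5f + 0.5f; }

// OpenGL divides s, t, and r by q; this mirrors that projective divide.
struct TexCoord { float s, t, r, q; };

TexCoord projectiveDivide( const TexCoord& tc )
{
    return TexCoord{ tc.s / tc.q, tc.t / tc.q, tc.r / tc.q, 1.f };
}
```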

The DepthMapShadows program creates a matrix that transforms OpenGL eye coordinates into depth-map coordinates by concatenating the following matrices:

R = T S P_L V_L


T translates by 0.5 in x, y, and z, and S scales by 0.5 in x, y, and z. P_L and V_L are the same projection and view matrices used to create the depth map. Each row of the result matrix R is a plane equation for generating s, t, r, and q coordinates. The DepthMapShadows example passes these rows as parameters when configuring texture-coordinate generation, as the following code shows:

GLdouble m[16];

// Temporarily use the model-view matrix to create
//   the texture coordinate transform
glMatrixMode( GL_MODELVIEW );
glPushMatrix();
glLoadIdentity();
glTranslatef( .5f, .5f, .5f );
glScalef( .5f, .5f, .5f );
gluPerspective( fov, aspect, 2., 200. );
lightView.multMatrix();
glGetDoublev( GL_MODELVIEW_MATRIX, m );
glPopMatrix();

// m now contains the plane equation values, but separate
//   values in each equation aren't stored in contiguous
//   memory. Transpose the matrix to remedy this.
ogld::matrixTranspose( m );

// Set the active texture unit.
ogld::glActiveTexture( depthMapTexture.getUnit() );

// Specify the texture coordinate plane equations.
glTexGendv( GL_S, GL_EYE_PLANE, &(m[0]) );
glTexGendv( GL_T, GL_EYE_PLANE, &(m[4]) );
glTexGendv( GL_R, GL_EYE_PLANE, &(m[8]) );
glTexGendv( GL_Q, GL_EYE_PLANE, &(m[12]) );


Next, you need to configure the depth-map texture object to perform the depth comparison. Do this by setting the parameter GL_TEXTURE_COMPARE_MODE to GL_COMPARE_R_TO_TEXTURE. This tells OpenGL the operands of the comparison. You also need to specify the operator by setting GL_TEXTURE_COMPARE_FUNC to a comparison enumerant such as GL_LEQUAL. After comparing the generated r value with the stored depth-map value, OpenGL outputs a Boolean value to indicate the result of the comparison. The following code, for example, tells OpenGL to output TRUE if the r value is less than or equal to the stored depth-map value (indicating that the corresponding fragment is lit) and FALSE otherwise (indicating shadow):

glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE,
                 GL_COMPARE_R_TO_TEXTURE );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL );


You also need to tell OpenGL what to do with the Boolean result. As shown earlier in Table 6-1, you can set GL_DEPTH_TEXTURE_MODE to GL_LUMINANCE, GL_INTENSITY, or GL_ALPHA. When GL_DEPTH_TEXTURE_MODE is GL_LUMINANCE, a FALSE r compare result produces shadow, whereas a TRUE result produces a lit value. Zero luminance is inadequate for scenes that contain ambient lighting, however, because OpenGL renders shadowed fragments without ambient light in that case.

The DepthMapShadows example preserves ambient lighting in shadows but requires a third rendering pass.[3] After creating the depth map, the code renders the final scene twice: once to produce lit fragments and again to produce shadowed fragments with correct ambient lighting. To do this, the code configures OpenGL to send the Boolean r compare result to the fragment's alpha component with the following code:

[3] The GL_ARB_shadow_ambient extension preserves ambient lighting without requiring a third pass.

 glTexParameteri( GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_ALPHA ); 


It also configures the OpenGL alpha-test feature to discard fragments with zero alpha. The following code shows how DepthMapShadows renders the final scene:

// Render unshadowed / lit fragments
depthMapTexture.apply();
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL );
glTexParameteri( GL_TEXTURE_2D, GL_DEPTH_TEXTURE_MODE, GL_ALPHA );
drawScene( false );

// Disable the light and render shadowed / unlit fragments
depthMapTexture.apply();
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_COMPARE_FUNC, GL_GEQUAL );
glDisable( GL_LIGHT0 );
drawScene( false );


In the first pass, OpenGL changes the fragment alpha to 1.0 if the r compare indicates a lit fragment. Otherwise, OpenGL sets the alpha to 0.0. Because the alpha test discards fragments with 0.0 alpha, this code renders only lit fragments. The second pass does just the opposite: It sets GL_TEXTURE_COMPARE_FUNC to GL_GEQUAL, so lit fragments get 0.0 alpha, causing the alpha test to discard them. With the light source disabled, OpenGL renders the shadow fragments using only ambient light.
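The two-pass selection can be modeled in a few lines of plain C++. The enum and functions here are illustrative stand-ins for the GL state involved:

```cpp
#include <cassert>

// Map the r-versus-stored-depth comparison to a 0.0 / 1.0 alpha, as
// GL_DEPTH_TEXTURE_MODE = GL_ALPHA does, then discard zero-alpha
// fragments as the alpha test is configured to do.
enum CompareFunc { LEQUAL, GEQUAL };

float depthCompareToAlpha( float r, float stored, CompareFunc func )
{
    const bool result = ( func == LEQUAL ) ? ( r <= stored )
                                           : ( r >= stored );
    return result ? 1.f : 0.f;
}

bool passesAlphaTest( float alpha ) { return alpha > 0.f; }
```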

This algorithm can't produce soft shadows or penumbra effects. The Boolean result of the r compare indicates only that the fragment is lit or shadowed. As a result, depth-map shadows suffer from significant aliasing, as visible in Figure 6-9 earlier in this chapter. The DepthMapShadows example tries to minimize this aliasing in two ways:

  • The code uses the largest portion of the window possible to produce the depth map. Larger windows increase the resolution of the resulting depth map.

  • When setting the projection matrix before rendering the depth map, the code uses a custom field of view based on light distance from the scene. When the light is far away, a narrow field of view uses available screen real estate more efficiently and produces a higher-resolution depth map than a wider field of view would produce.

Finally, your application needs to safeguard against differences between the generated r values and the stored depth values in the depth map. The generated r value in the final image rarely matches the depth value retrieved from the depth map. Reeves solves this with percentage closer filtering (Reeves 1987).[4] The DepthMapShadows example, however, hides much of the aliasing artifacts with the depth offset feature. When creating the depth map, the example code uses depth offset to push the depth values back into the depth buffer. As a result, the depth map contains depth values that are biased slightly larger than the generated r values. The depth offset must balance hiding inherent aliasing artifacts against displacing the shadows; too little bias results in aliasing, whereas too much bias produces incorrect shadow effects.

[4] Applications can implement percentage-closer filtering by using fragment shaders (Gerasimov 2004).

6.2.4. Specular Highlights

OpenGL uses GL_TEXTURE_ENV_MODE to determine how to combine the texture color with the fragment primary color. The result becomes the new fragment primary color. By default, GL_TEXTURE_ENV_MODE is GL_MODULATE, which means that the colors are combined using componentwise multiplication.

To use both OpenGL lighting and texture mapping, you must use GL_MODULATE. OpenGL computes lighting before performing texture mapping, so the fragment primary color already contains lighting and shading effects. Multiplying these colors by the texture colors produces geometry that is both lit and texture mapped.

Unfortunately, GL_MODULATE adversely affects specular highlights. Consider what happens when the fragment primary color is part of a white specular highlight, but the texture color is very dark or black. Multiplying 1.0 (white) by 0.0 (black) produces 0.0, effectively erasing the specular highlight.

OpenGL version 1.4 provides the separate specular feature to improve the appearance of specular highlights on texture mapped surfaces. This feature introduces a new state variable, GL_LIGHT_MODEL_COLOR_CONTROL, which governs whether OpenGL stores specular lighting effects in the primary or secondary color. It also adds a new per-fragment operation after texture mapping, which adds the secondary color to the fragment primary color. In summary, this feature causes OpenGL to apply the specular highlight after texture mapping.
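Numerically, the problem and the fix look like this. The sketch below mirrors the behavior described above, not the pipeline's exact fixed-point arithmetic:

```cpp
#include <algorithm>
#include <cassert>

struct Color { float r, g, b; };

// GL_MODULATE: componentwise multiply of the lit primary color and the
// texture color. A dark texel wipes out a white specular highlight.
Color modulate( const Color& primary, const Color& texel )
{
    return Color{ primary.r * texel.r, primary.g * texel.g,
                  primary.b * texel.b };
}

// Separate specular: the secondary (specular) color is added after
// texturing, clamped so components don't exceed 1.0.
Color addSecondary( const Color& c, const Color& secondary )
{
    return Color{ std::min( c.r + secondary.r, 1.f ),
                  std::min( c.g + secondary.g, 1.f ),
                  std::min( c.b + secondary.b, 1.f ) };
}
```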

Figure 6-11 shows the output of the SecondaryColor example program when the separate secondary color feature is enabled.

Figure 6-11. A screen shot from the SecondaryColor example program. Source: Dean Randazzo.


The SecondaryColor example allows you to disable the separate secondary color feature by using the pop-up menu. Compare Figure 6-11 with Figure 6-12, which has the separate secondary color feature disabled. When this feature is disabled, dark areas of the texture image mute the specular highlight.

Figure 6-12. A screen shot from the SecondaryColor example program with secondary color disabled. Note the muted specular highlights. Source: Dean Randazzo.


Applications that need to render shiny texture mapped surfaces use the separate secondary color feature to produce acceptable specular highlights. To enable this feature, use glLightModeli() as follows:

 glLightModeli( GL_LIGHT_MODEL_COLOR_CONTROL, GL_SEPARATE_SPECULAR_COLOR ); 


By default, GL_LIGHT_MODEL_COLOR_CONTROL is set to GL_SINGLE_COLOR, which instructs OpenGL to apply specular lighting before texture mapping. This is the behavior of OpenGL before the introduction of separate specular color in version 1.4.

In addition to configuring OpenGL to store specular lighting effects in the secondary color, setting GL_LIGHT_MODEL_COLOR_CONTROL to GL_SEPARATE_SPECULAR_COLOR turns on the per-fragment operation that adds the secondary color to the fragment primary color after texture mapping. This per-fragment operation can be enabled separately, allowing applications to use secondary color for purposes other than specular highlights. Applications rarely do this, however. For more information, see Chapter 9, "Texture Mapping," of OpenGL® Programming Guide.

6.2.5. Environment Maps

Separate secondary color addresses only one problem with specular lighting. As described in Chapter 4, "Lighting," a bigger issue is that OpenGL computes lighting values at each vertex, producing unstable specular highlights on low-resolution geometry. Increasing geometric resolution to improve the appearance of specular highlights reduces performance if your application is geometry limited.

Furthermore, specular highlights produced by traditional graphics hardware are merely an approximation. In reality, specular highlights are reflections of the light source. Shiny surfaces typically reflect more than just a light source; often, they reflect their entire environment (Blinn 1976).

To support environment mapping, OpenGL supports both sphere and cube maps. Cube maps provide superior support for this algorithm and are easier for applications to create than sphere maps. The introduction of cube maps in OpenGL version 1.3 has largely supplanted sphere mapping, which most OpenGL programmers now consider obsolete.

Cube maps are a form of texture mapping that requires six texture images stored in the same texture object. In the classic implementation of environment mapping, each image in the cube map represents a 90-degree field of view of the environment: one image for each major axis direction. At render time, OpenGL uses the s, t, and r texture coordinates as a direction vector to select one of the six cube map faces and then performs texel lookup with derived s and t values in the selected face. For environment mapping, applications enable texture-coordinate generation for s, t, and r with GL_TEXTURE_GEN_MODE set to GL_REFLECTION_MAP. This mode uses the surface normal to produce texture coordinates identical to reflection vectors. As a result, the cube map images appear on rendered geometry as though the surface were highly reflective.
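The face-selection step can be sketched as follows: the largest-magnitude component of the (s, t, r) direction vector picks the face, and its sign picks positive or negative. The face indices here are illustrative, in the +x, -x, +y, -y, +z, -z order of the OpenGL face targets:

```cpp
#include <cassert>
#include <cmath>

// Select a cube map face (0-5) from an (s, t, r) direction vector, the
// way the classic major-axis rule works. Ties resolve in s, t, r order.
int selectCubeFace( float s, float t, float r )
{
    float as = std::fabs( s ), at = std::fabs( t ), ar = std::fabs( r );
    if( as >= at && as >= ar ) return s >= 0.f ? 0 : 1;   // +x / -x
    if( at >= ar )             return t >= 0.f ? 2 : 3;   // +y / -y
    return                            r >= 0.f ? 4 : 5;   // +z / -z
}
```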

To use cube maps, create a texture object and bind it to GL_TEXTURE_CUBE_MAP, as the following code segment shows:

GLuint texId;
glGenTextures( 1, &texId );
glBindTexture( GL_TEXTURE_CUBE_MAP, texId );


To complete a cube map texture object, your application needs to store six texture images in the object. All six images must be square, have the same dimensions, and share the same internal format. Your code will need to make six calls to glTexImage2D(), each with a different target parameter: GL_TEXTURE_CUBE_MAP_POSITIVE_X, GL_TEXTURE_CUBE_MAP_NEGATIVE_X, GL_TEXTURE_CUBE_MAP_POSITIVE_Y, GL_TEXTURE_CUBE_MAP_NEGATIVE_Y, GL_TEXTURE_CUBE_MAP_POSITIVE_Z, and GL_TEXTURE_CUBE_MAP_NEGATIVE_Z. Note that these target enumerants have sequential values, which allows your application to set all six cube map faces with a code loop.
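Because the face targets are sequential (GL_TEXTURE_CUBE_MAP_POSITIVE_X is 0x8515 in the OpenGL registry, and the other five faces follow in order), a loop can compute each target as a base value plus an offset. A standalone sketch, with the actual glTexImage2D() call left as a comment so no GL context is required:

```cpp
#include <cassert>

// Face-target values from the OpenGL registry; defined locally here so
// the sketch compiles without gl.h.
const unsigned int TEXTURE_CUBE_MAP_POSITIVE_X = 0x8515;

unsigned int cubeFaceTarget( int i )   // i in [0, 5]
{
    return TEXTURE_CUBE_MAP_POSITIVE_X + static_cast<unsigned int>( i );
}

// In real code, inside a loop over i:
//   glTexImage2D( cubeFaceTarget( i ), 0, internalFormat,
//                 dim, dim, 0, format, type, image[ i ] );
```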

When setting texture-object state for a cube map, use GL_TEXTURE_CUBE_MAP as the target parameter to glTexParameteri(). The following code segment sets wrapping and filter parameters in a cube map texture object:

glTexParameteri( GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE );
glTexParameteri( GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER,
                 GL_LINEAR_MIPMAP_LINEAR );
glTexParameteri( GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR );


Another part of texture-object state is whether texture mapping is enabled. For 2D (not cube map) texture objects, you enable texture mapping with glEnable( GL_TEXTURE_2D ). To enable a cube map texture object, however, you must call glEnable( GL_TEXTURE_CUBE_MAP ).

To enable texture-coordinate generation, first set the active texture unit for the cube map texture with glActiveTexture(). For environment mapping, your application should use GL_REFLECTION_MAP and enable texture-coordinate generation for s, t, and r, as the following code shows:

glEnable( GL_TEXTURE_GEN_S );
glEnable( GL_TEXTURE_GEN_T );
glEnable( GL_TEXTURE_GEN_R );
glTexGeni( GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
glTexGeni( GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );
glTexGeni( GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP );


The results produced by environment mapping are only as good as the quality of the individual cube map faces. To environment-map a computer-generated scene, applications simply render the six faces, typically into the back buffer, and use glCopyTexImage2D() to load the images into the cube map texture object. Other applications use scanned-in photos of actual environments as cube map images. Creating quality images for environment mapping is often more art than science and depends greatly on your application.

The CubeMap example program, available on this book's Web site, demonstrates a common use for cube maps that features straightforward cube map image generation. The example uses cube maps to apply a stable specular highlight to low-resolution geometry. Figure 6-13 and Plate 7 show a screen shot from this example.

Figure 6-13. Screen shot from the CubeMap example program. The geometry lacks sufficient resolution to produce an acceptable specular highlight using OpenGL lighting. Instead, the example uses cube maps to produce the specular highlights.


The CubeMap example program uses the ogld::CubeMap class to manage the cube map texture object and its images. ogld::CubeMap allows an application to load six individual images, but by default, the class creates six images that simulate a specular highlight. To do this, ogld::CubeMap creates six GL_LUMINANCE format images and clears them to zero intensity (black). For the image corresponding to the positive x axis, however, ogld::CubeMap adds a full-intensity (white) circle to the center of the image. As a result, the default images created by ogld::CubeMap create a specular highlight for a light source located at positive x in eye coordinates.

Most applications don't restrict themselves to lights at positive x in eye coordinates, however. Furthermore, applications typically use GL_REFLECTION_MAP texture-coordinate generation, which produces eye-coordinate reflection vectors. To use the default ogld::CubeMap images in this context, the CubeMap example places a transformation in the texture matrix to transform reflection vectors from eye coordinates to world coordinates and, further, to transform them from the positive x axis to the actual light position.

The example code uses transformation T, which is the concatenation of two transformations, M⁻¹ and V⁻¹:

T = M⁻¹V⁻¹


M⁻¹ is the inverse of a matrix that transforms the positive x axis to the light vector. V⁻¹ is the inverse of the view transformation. The CubeMap example program takes a shortcut in this computation. Assuming that the light source is located at infinity makes position irrelevant; only direction is important. When a matrix only transforms from one orthonormal orientation basis to another, its inverse is the same as its transpose, which is much simpler to compute. This shortcut produces acceptable results only if the light isn't too close to the geometry. For an example of computing this transformation and storing it in the texture matrix, see the function setCubeMapTextureMatrix() in CubeMap.cpp.
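The orthonormal-basis shortcut is easy to verify numerically: for a pure rotation matrix, multiplying by its transpose yields the identity, so the transpose can stand in for the inverse. A small 3x3 sketch (illustrative code, not the example's ogld matrix class):

```cpp
#include <cassert>

// Row-major 3x3 matrices; enough to demonstrate transpose-as-inverse.
struct Mat3 { float m[3][3]; };

Mat3 transpose( const Mat3& a )
{
    Mat3 t{};
    for( int i = 0; i < 3; ++i )
        for( int j = 0; j < 3; ++j )
            t.m[i][j] = a.m[j][i];
    return t;
}

Mat3 multiply( const Mat3& a, const Mat3& b )
{
    Mat3 p{};
    for( int i = 0; i < 3; ++i )
        for( int j = 0; j < 3; ++j )
            for( int k = 0; k < 3; ++k )
                p.m[i][j] += a.m[i][k] * b.m[k][j];
    return p;
}
```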

The CubeMap example uses GL_ADD for the texture environment mode. The vast majority of pixels in the cube map images are black, so GL_ADD doesn't change the incoming fragment color. For the white pixels in the center of the positive x axis image, however, GL_ADD adds full-intensity red, green, and blue to the fragment color. OpenGL clamps the result so that the individual components don't overflow. The result is a white specular highlight.




OpenGL Distilled, by Paul Martz. ISBN 0321336798.