TRADITIONAL 3D HARDWARE-ACCELERATED LIGHTING MODELS

Before we get into the more esoteric uses of shaders, we'll first take a look at the traditional method of calculating lighting in hardware—a method that you'll find is sufficient for most of your needs.

The traditional approach in real-time computer graphics has been to calculate lighting at a vertex as a sum of the ambient, diffuse, and specular light. In the simplest form (used by OpenGL and Direct3D), the function is simply the sum of these lighting components (clamped to a maximum color value). Thus we have an ambient term and then a sum of all the light from the light sources:

i_{total} = k_a i_a + \sum_{lights} \left( k_d i_d + k_s i_s \right)

where i_total is the intensity of light (as an rgb value) given by the sum of the intensity of the global ambient value and the diffuse and specular components of the light from the light sources. This is called a local lighting model since the only light that reaches a vertex comes directly from a light source, not from other objects. That is, lights are lights, not objects. Objects that are brightly lit don't illuminate or shadow any other objects.

I've included the reflection coefficients, k, for each term for completeness since you'll frequently see the lighting equation in this form. The reflection coefficients are in the [0,1] range and are specified as part of the material properties. However, they are strictly empirical: since they simply scale the overall intensity of the material color, the material color values are usually adjusted directly rather than through a separate reflection coefficient, so we'll ignore them in our actual color calculations.

This is a very simple lighting equation and gives fairly good results. However, it does fail to take into account any gross roughness or anything other than perfect isotropic reflection. That is, the surface is treated as being perfectly smooth and equally reflective in all directions. Thus this equation is really only good at modeling the illumination of objects that don't have any "interesting" surface properties. By this I mean anything other than a smooth surface (like fur or sand) or a surface that doesn't really reflect light uniformly in all directions (like brushed metal, hair, or skin). However, with liberal use of texture maps to add detail, this model has served pretty well and can still be used for a majority of the lighting processing to create a realistic environment in real time. Let's take a look at the individual parts of the traditional lighting pipeline.

Ambient Light

Ambient light is the light that comes from all directions—thus all surfaces are illuminated equally regardless of orientation. However, this is a big hack in traditional lighting calculations since "real" ambient light really comes from the light reflected from the "environment." This would take a long time to calculate and would require ray tracing or the use of radiosity methods, so traditionally, we just say that there's x amount of global ambient light and leave it at that. This makes ambient light a little different from the other lighting components since it doesn't depend on a light source. However, you typically do want ambient light in your scene because having a certain amount of ambient light makes the scene look natural. One large problem with the simplified lighting model is that there is no illumination of an object with reflected light—the calculations required are enormous for a scene of any complexity (every object can potentially reflect some light and provide some illumination for every other object in a scene) and are too time consuming to be considered for real-time graphics.

So, like most things in computer graphics, we take a look at the real world, decide it's too complicated, and fudge up something that kinda works. Thus the ambient light term is the "fudge factor" that accounts for our simple lighting model's lack of an inter-object reflectance term.

i_a = m_a s_a

where i_a is the ambient light intensity, m_a is the ambient material color, and s_a is the light source ambient color (the two colors are modulated componentwise). Typically, the ambient light is some amount of white (i.e., equal rgb values) light, but you can achieve some nice effects using colored ambient light. Though it's very useful, ambient light doesn't help differentiate objects in a scene, since objects rendered with the same ambient value tend to blend together: the resulting color is the same. Figure 3.5 shows a scene with just ambient illumination. You can see that it's difficult to make out details or depth information with just ambient light.

Figure 3.5: Ambient light provides illumination, but no surface details.
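
To make this concrete, here's a minimal C++ sketch of the ambient term. The Vec3 struct (used here as an rgb color) and the function name are my own scaffolding for illustration, not code from OpenGL or DirectX; it just modulates the material and light ambient colors componentwise.

    #include <cstdio>

    struct Vec3 { float x, y, z; };   // used here as an rgb color

    // i_a = m_a * s_a, modulated componentwise
    Vec3 AmbientTerm(const Vec3& materialAmbient, const Vec3& lightAmbient)
    {
        return { materialAmbient.x * lightAmbient.x,
                 materialAmbient.y * lightAmbient.y,
                 materialAmbient.z * lightAmbient.z };
    }

    int main()
    {
        Vec3 c = AmbientTerm({ 0.8f, 0.2f, 0.2f }, { 0.3f, 0.3f, 0.3f });
        std::printf("ambient = %.2f %.2f %.2f\n", c.x, c.y, c.z);   // 0.24 0.06 0.06
    }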

Ambient lighting is your friend. With it you make your scene seem more realistic than it is. A world without ambient light is one filled with sharp edges, of bright objects surrounded by dark, harsh shadows. A world with too much ambient light looks washed out and dull. Since the number of actual light sources supported by the hardware FFP is limited (typically to eight simultaneous lights), you'll be better off applying the lights to add detail to the area your user is focused on and letting ambient light fill in the rest. Before you point out that talking about the hardware limitation on the number of lights has no meaning in a book on shaders, where we do the lighting calculations ourselves, I'll point out that eight lights were typically the maximum that hardware engineers designed their fixed-function hardware to handle. It was a performance consideration. There's nothing stopping you (except buffer size) from writing a shader that calculates the effects of a hundred simultaneous lights. But I think you'll find that it runs much too slowly to be used to render your entire scene. Still, the nice thing about shaders is that you can.

Diffuse Light

Diffuse light is the light that is absorbed by a surface and then reflected equally in all directions. In the traditional model, this is ideal diffuse reflection: good for rough surfaces, where the reflected intensity is constant across the surface, independent of viewpoint, and depends only upon the direction of the light source to the surface. This means that regardless of the direction from which you view an object with a stationary diffuse light source on it, the brightness of any point on the surface will remain the same. Thus, unlike ambient light, the intensity of diffuse light is directional and is a function of the angle between the incoming light and the surface. This type of shading is called Lambertian shading after Lambert's cosine law, which states that the intensity of the light reflected from an ideal diffuse surface is proportional to the cosine of the angle between the light direction and the vertex normal.

Since we're dealing with vertices here and not surfaces, each vertex has a normal associated with it. You might hear talk of per-vertex normals vs. per-polygon normals. The difference is that per-polygon shading shares one normal among all the vertices in a polygon, whereas per-vertex shading has a normal for each vertex. OpenGL has the ability to specify per-polygon normals, and Direct3D does not. Since vertex shaders can't share information between vertices (unless you explicitly copy the data yourself), we'll focus on per-vertex lighting. Figure 3.6 shows the intensity of reflected light as a function of the angle between the vertex normal and the light direction.

Figure 3.6: Diffuse light decreases as the angle between the light vector and the surface normal increases.

The equation for calculating diffuse lighting is

i_d = m_d s_d (n \cdot l)

which is similar to the ambient light equation, except that the diffuse light term is now multiplied by the dot product of the unit normal of the vertex, n, and the unit direction vector from the vertex to the light, l (not the direction from the light). Note that the m_d value is a color vector, so there are rgb or rgba values that will get modulated.

Since n \cdot l = cos(θ) for unit vectors, where θ is the angle between them, the diffuse light is at its maximum when the angle between them is zero and cos(θ) is 1. When the angle is 90°, cos(θ) is zero and the diffuse light is zero. One calculation advantage is that a negative cos(θ) value tells us the light isn't illuminating the vertex at all. However, since you (probably!) don't want the light illuminating sides that it physically can't shine on, you want to clamp the diffuse light so that it contributes only when cos(θ) is positive. Thus the equation in practice looks more like

i_d = m_d s_d \max(n \cdot l, 0)

where we've clamped the diffuse value to only positive values. Figure 3.7 was rendered with just diffuse lighting. Notice how you can tell a lot more detail about the objects and pick up distance cues from the shading.

Figure 3.7: Diffuse shading brings out some surface details.
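
Here's a similarly minimal C++ sketch of the clamped diffuse term, again using my own little Vec3 scaffolding rather than any particular API. It assumes n and l are already unit length and simply evaluates max(n · l, 0) times the modulated material and light diffuse colors.

    #include <algorithm>
    #include <cstdio>

    struct Vec3 { float x, y, z; };   // direction or rgb color

    static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // i_d = m_d * s_d * max(n . l, 0); n and l are assumed to be unit length
    Vec3 DiffuseTerm(const Vec3& n, const Vec3& l,
                     const Vec3& materialDiffuse, const Vec3& lightDiffuse)
    {
        float nDotL = std::max(dot(n, l), 0.0f);          // clamp: no negative light
        return { materialDiffuse.x * lightDiffuse.x * nDotL,
                 materialDiffuse.y * lightDiffuse.y * nDotL,
                 materialDiffuse.z * lightDiffuse.z * nDotL };
    }

    int main()
    {
        Vec3 n = { 0.0f, 1.0f, 0.0f };                    // normal straight up
        Vec3 l = { 0.0f, 0.7071f, 0.7071f };              // light 45 degrees off the normal
        Vec3 c = DiffuseTerm(n, l, { 1.0f, 0.0f, 0.0f }, { 1.0f, 1.0f, 1.0f });
        std::printf("diffuse = %.3f %.3f %.3f\n", c.x, c.y, c.z);   // about 0.707 red
    }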

The problem with just diffuse lighting is that it's independent of the viewer's direction. That is, it's strictly a function of the surface normal and the light direction. Thus as we change the viewing angle to a vertex, the vertex's diffuse light value never changes. You have to rotate the object (change the normal direction) or move the light (change the light direction) to get a change in the diffuse lighting of the object.

However, when we combine the ambient and diffuse terms, as in Figure 3.8, we can see that the two types of light give a much more realistic representation than either does alone. This combination of ambient and diffuse is used for a surprisingly large number of items in rendered scenes, since when it's combined with texture maps to give detail to a surface, you get a very convincing shading effect.

Figure 3.8: When diffuse and ambient terms are combined, you get more detail and a more natural-looking scene. The final color is the combination of the ambient and diffuse colors.

Specular Light

Ambient light is the light that comes from the environment (i.e., it's directionless); diffuse light is the light from a light source that is reflected by a surface evenly in all directions (i.e., it's independent of the viewer's position). Specular light is the light from a light source that is reflected by a surface in such a manner that it's a function of both the light's vector and the viewer's direction. While ambient and diffuse light give the object an illuminated matte surface, specular light is what gives the highlights to an object. These highlights are greatest when the viewer is looking directly along the reflection vector from the surface. This is illustrated in Figure 3.9.

Figure 3.9: Specular light's intensity follows the reflection vector.

Most discussions of lighting (including this one) start with Phong's lighting equation (which is not the same as Phong's shading equation). In order to start discussing specular lighting, let's look at a diagram of the various vectors that are used in a lighting equation (Figure 3.10). We have a light source, some point the light is shining on, and a viewpoint. The light direction (from the point to the light) is vector l; the reflection of the light vector about the surface normal (as if the surface were a mirror) is r; the direction from the point to the viewpoint is vector v; and the point's normal is n.

Phong's Specular Light Equation

Warnock [WARNOCK 1969] and Romney [ROMNEY 1969] were the first to try to simulate highlights using a cos^n(θ) term. But it wasn't until Phong Bui-Tuong [BUI 1998] reformulated this into a more general model, one that formalized the power value as a measure of surface roughness, that we arrive at the terms used today for specular highlights. Phong's equation for specular lighting is

i_s = s_s (r \cdot v)^{m_s}

Figure 3.10: The relationship between the normal n, the light vector l, the view direction v, and the reflection vector r.

It basically says that the more the view direction, v, is aligned with the reflection direction, r, the brighter the specular light will be. The big difference is the introduction of the m_s term, which is a power term that attempts to approximate the distribution of specular light reflection. The m_s term is typically called the "shininess" value. The larger the m_s value, the "tighter" (but not brighter) the specular highlights will be. This can be seen in Figure 3.11, which shows values of (r \cdot v)^m for values of m ranging from 1 to 128. As you can see, the specular highlights get narrower for higher values, but they don't get any brighter.

Figure 3.11: Phong's specular term for various values of the "shininess" term. Note that the values never get above 1.

Now, as you can see, this requires some calculation, since we can't know r beforehand; it's the light vector reflected about the point's normal. To calculate r we can use the following equation:[3]

r = 2 \left( \frac{n \cdot l}{n \cdot n} \right) n - l

If l and n are normalized, then the resulting r is normalized and the equation can be simplified to

r = 2 (n \cdot l) n - l

And just as we did for diffuse lighting, if the dot product is negative, then the term is ignored.
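
As a sketch of the arithmetic involved (my own Vec3 helpers and function names, not any API's), here is the reflection vector and Phong's specular power term, assuming n, l, and v are already normalized:

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    // r = 2(n . l)n - l, valid when n and l are unit length
    static Vec3 Reflect(const Vec3& n, const Vec3& l)
    {
        float s = 2.0f * dot(n, l);
        return { s*n.x - l.x, s*n.y - l.y, s*n.z - l.z };
    }

    // Phong specular factor: max(r . v, 0) raised to the shininess power
    static float PhongSpecular(const Vec3& n, const Vec3& l, const Vec3& v, float shininess)
    {
        Vec3 r = Reflect(n, l);
        float rDotV = std::max(dot(r, v), 0.0f);          // ignore the term when negative
        return std::pow(rDotV, shininess);
    }

    int main()
    {
        Vec3 n = { 0.0f, 1.0f, 0.0f };
        Vec3 l = { 0.0f, 0.7071f, 0.7071f };
        Vec3 v = { 0.0f, 0.7071f, -0.7071f };             // viewer near the mirror direction
        std::printf("specular = %.3f\n", PhongSpecular(n, l, v, 32.0f));   // close to 1
    }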

Figure 3.12 shows the scene with just specular lighting. As you can see, we get an impression of a very shiny surface.

Figure 3.12: A specular term just shows the highlights.

When we add the ambient, diffuse, and specular terms together, we get Figure 3.15. The three terms all act in concert to give us a fairly good imitation of a nice smooth surface that can have a varying degree of shininess to it.

You may have noticed that computing the reflection vector took a fair amount of effort. In the early days of computer graphics, there was a concerted effort to reduce anything that took a lot of computation, and the reflection vector of Phong's equation was one such item.

Blinn's Simplification: OpenGL and DirectX Lighting

It's computationally expensive to calculate specular lighting with Phong's equation because of the cost of computing the reflection vector. Blinn [BLINN 1977] suggested, instead of using the reflection and view vectors, that we create a "half" vector that lies between the light and view vectors. This is shown as the h vector in Figure 3.13. Just as Phong's equation reaches its maximum when the reflection vector is coincident with the view vector (thus the viewer is looking directly along the reflection vector), so does Blinn's when the half vector is coincident with the normal vector. In that case the angle between the view vector and the normal is the same as the angle between the light vector and the normal. Blinn's version of Phong's equation is

i_s = s_s (n \cdot h)^{m_s}

Figure 3.13: The half-angle vector is an averaging of the light and view vectors.

where the half vector is defined as

h = \frac{l + v}{\left| l + v \right|}

The advantage is that no reflection vector is needed; instead, we can use values that are readily available, namely, the view and light vectors. Note that both OpenGL and the DirectX FFP use Blinn's equation for specular light.
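
A quick C++ sketch of Blinn's version, again with my own scaffolding: build the half vector from the light and view vectors, normalize it, and raise the clamped n · h to the shininess power.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    static Vec3 normalize(const Vec3& a)
    {
        float len = std::sqrt(dot(a, a));
        return { a.x / len, a.y / len, a.z / len };
    }

    // Blinn-Phong specular factor: h = (l + v)/|l + v|, then max(n . h, 0)^shininess
    static float BlinnSpecular(const Vec3& n, const Vec3& l, const Vec3& v, float shininess)
    {
        Vec3 h = normalize({ l.x + v.x, l.y + v.y, l.z + v.z });
        return std::pow(std::max(dot(n, h), 0.0f), shininess);
    }

    int main()
    {
        Vec3 n = { 0.0f, 1.0f, 0.0f };
        Vec3 l = { 0.0f, 0.7071f, 0.7071f };   // light 45 degrees off the normal
        Vec3 v = { 0.0f, 1.0f, 0.0f };         // viewer looking straight down the normal
        // The highlight gets tighter (the off-peak value drops) as the shininess grows
        std::printf("m=8: %.4f  m=32: %.4f  m=128: %.4f\n",
                    BlinnSpecular(n, l, v, 8.0f),
                    BlinnSpecular(n, l, v, 32.0f),
                    BlinnSpecular(n, l, v, 128.0f));
    }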

Besides the speed advantage, there are some other differences to note between Phong's specular equation and Blinn's.

  • If you multiply Blinn's exponent by 4, you approximate the results of Phong's equation.

  • Thus if there's an upper limit on the value of the exponent, Phong's equation can produce sharper highlights.

  • For angles between l and v greater than 45° (i.e., when the light is behind an object and you're looking at an edge), the highlights are longer along the edge direction with Phong's equation.

  • Blinn's equation produces results closer to those seen in nature.

For an in-depth treatment of the differences between the two equations, there's an excellent discussion in [FISHER 1994]. Figure 3.14 shows the difference between Phong lighting and Blinn-Phong lighting.

Figure 3.14: Blinn-Phong specular on the left, Phong specular on the right.

The Lighting Equation

So now that we've computed the various light contributions, we can add them up to get the final color value. Note that the final color values will have to be clamped to the [0,1] range for the final rgb values.
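
Pulling the pieces together, a per-vertex lighting routine might look something like the following C++ sketch. The Light and Material layouts and the helper names are hypothetical (and attenuation is left out for now); this is just the ambient plus per-light diffuse and specular sum with the final clamp to [0,1], not the actual OpenGL or DirectX pipeline code.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Vec3 { float x, y, z; };

    static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
    static Vec3  mul(const Vec3& a, const Vec3& b) { return { a.x*b.x, a.y*b.y, a.z*b.z }; }
    static Vec3  add(const Vec3& a, const Vec3& b) { return { a.x+b.x, a.y+b.y, a.z+b.z }; }
    static Vec3  scale(const Vec3& a, float s)     { return { a.x*s, a.y*s, a.z*s }; }
    static Vec3  normalize(const Vec3& a)          { float k = 1.0f / std::sqrt(dot(a, a)); return scale(a, k); }

    // Hypothetical light and material descriptions, just for this sketch
    struct Light    { Vec3 dirToLight; Vec3 diffuse; Vec3 specular; };
    struct Material { Vec3 ambient; Vec3 diffuse; float shininess; };

    // i_total = ambient + sum over lights of (diffuse + specular), clamped to [0,1]
    Vec3 LightVertex(const Vec3& n, const Vec3& toViewer, const Vec3& globalAmbient,
                     const Material& m, const std::vector<Light>& lights)
    {
        Vec3 color = mul(m.ambient, globalAmbient);                    // ambient term
        for (const Light& lt : lights)
        {
            float nDotL = std::max(dot(n, lt.dirToLight), 0.0f);       // clamped Lambertian diffuse
            color = add(color, scale(mul(m.diffuse, lt.diffuse), nDotL));

            Vec3 h = normalize(add(lt.dirToLight, toViewer));          // Blinn's half vector
            float spec = std::pow(std::max(dot(n, h), 0.0f), m.shininess);
            color = add(color, scale(lt.specular, spec));              // specular highlight
        }
        color.x = std::min(color.x, 1.0f);                             // clamp the final rgb values
        color.y = std::min(color.y, 1.0f);
        color.z = std::min(color.z, 1.0f);
        return color;
    }

    int main()
    {
        Material m = { { 0.2f, 0.2f, 0.2f }, { 0.8f, 0.2f, 0.2f }, 32.0f };
        std::vector<Light> lights = { { normalize({ -1.0f, 1.0f, 0.0f }),
                                        { 1.0f, 1.0f, 1.0f }, { 1.0f, 1.0f, 1.0f } } };
        Vec3 c = LightVertex({ 0.0f, 1.0f, 0.0f }, normalize({ 0.0f, 1.0f, 1.0f }),
                             { 0.3f, 0.3f, 0.3f }, m, lights);
        std::printf("final color = %.3f %.3f %.3f\n", c.x, c.y, c.z);
    }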

Our final scene with ambient, diffuse, and (Blinn's) specular light contributions (with one white light above and to the left of the viewer) looks like Figure 3.15.

Figure 3.15: A combination of ambient, diffuse, and specular illumination.

It may be surprising to discover that there's more than one way to calculate the shading of an object, but that's because the model is empirical, and there's no correct way, just different ways that all have tradeoffs. Until now though, the only lighting equation you've been able to use has been the one we just formulated.

Most of the interesting work in computer graphics is tweaking that equation, or in some cases, throwing it out altogether and coming up with something new.

The next sections will discuss some refinements and alternative ways of calculating the various coefficients of the lighting equation. We hope you'll get some ideas that you'll be able to use to create your own unique shaders.

Light Attenuation

Light in the real world loses its intensity as the inverse square of the distance from the light source to the surface being illuminated. In practice, though, this makes the intensity drop off too abruptly near the light and then vary too little once the light is far away, so an empirical model was developed that gives satisfactory results. This is the attenuation model used in OpenGL and DirectX. The attenuation factor is f_atten, and the distance d between the light and the vertex is always positive. The attenuation factor is calculated by the following equation:

f_{atten} = \frac{1}{k_c + k_l d + k_q d^2}

where the k_c, k_l, and k_q parameters are the constant, linear, and quadratic attenuation constants, respectively. To get the "real" (inverse square) attenuation factor, you can set k_q to one and the others to zero.

The attenuation factor is multiplied by the light's diffuse and specular values; typically, each light will have its own set of these parameters. The lighting equation with the attenuation factor looks like this:

i_{total} = i_a + \sum_{lights} f_{atten} \left( i_d + i_s \right)
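
Here's a small C++ sketch of the attenuation factor itself; the parameter names are mine, but the formula is the one above. Multiplying the per-light diffuse and specular terms by this value gives the attenuated equation.

    #include <cstdio>

    // f_atten = 1 / (kc + kl*d + kq*d*d)
    // kc = 1 with kl = kq = 0 gives no attenuation;
    // kq = 1 with the others at zero gives the "real" inverse-square falloff
    float Attenuation(float d, float kc, float kl, float kq)
    {
        return 1.0f / (kc + kl * d + kq * d * d);
    }

    int main()
    {
        // Compare a gentle linear falloff with the inverse-square falloff at a few distances
        for (float d = 1.0f; d <= 8.0f; d *= 2.0f)
            std::printf("d = %4.1f   linear = %.3f   inverse-square = %.3f\n",
                        d, Attenuation(d, 1.0f, 0.25f, 0.0f), Attenuation(d, 0.0f, 0.0f, 1.0f));
    }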

Figure 3.16 shows a sample of what attenuation looks like. This image is the same as the one shown in Figure 3.15, but with light attenuation added.

Figure 3.16: A scene with light attenuation. The white sphere is the light position.

Schlick's Simplification for the Specular Exponential Term

Real-time graphics programmers are always looking for simplifications. You've probably gathered that there's no such thing as the "correct" lighting equation, just a series of hacks to make things look right with as little computational effort as possible. Schlick [SCHLICK 1994] suggested a replacement for the exponential term since that's a fairly expensive operation. If we define part of our specular light term as follows:

S = t^{m_s}

where t is the specular dot product (r \cdot v for Phong, n \cdot h for Blinn-Phong) and S is either the Phong or Blinn-Phong flavor of the specular lighting equation, then Schlick's simplification is to replace the preceding part of the specular equation with

S \approx \frac{t}{m_s - m_s t + t}

which eliminates the need for an exponential term. At first glance, a plot of Schlick's function looks very similar to the exponential equation (Figure 3.17).

Figure 3.17: Schlick's term for specular looks very much like the more expensive Phong term.
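
A tiny C++ sketch makes the trade-off easy to play with; the function names are mine, and t stands for the specular dot product:

    #include <cmath>
    #include <cstdio>

    // The exponentiated specular dot product used by Phong (t = r . v) or Blinn (t = n . h)
    float SpecPow(float t, float m)     { return std::pow(t, m); }

    // Schlick's approximation: t / (m - m*t + t), no pow() required
    float SpecSchlick(float t, float m) { return t / (m - m * t + t); }

    int main()
    {
        const float m = 32.0f;   // shininess
        // The two agree exactly at t = 1 and both fall off as t decreases
        for (float t = 1.0f; t >= 0.59f; t -= 0.1f)
            std::printf("t = %.1f   pow = %.4f   schlick = %.4f\n",
                        t, SpecPow(t, m), SpecSchlick(t, m));
    }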

If we plot both equations in the same graph (Figure 3.18), we can see some differences and evaluate just how well Schlick's simplification works. The blue values are Schlick's, and the red are the exponential plot. As the view and light angles get closer (i.e., get closer to zero on the x axis), we can see that the values of the curves are quite close. (For a value of zero, they overlap.) As the angles approach a grazing angle, we can see that the approximation gets worse. This would mean that when there is little influence from a specular light, Schlick's equation would be slightly less sharp for the highlight.

Figure 3.18: Schlick's vs. Phong's specular terms.

You might notice the green line in Figure 3.18. Unlike the limit of a value of 128 for the exponential imposed in both OpenGL and DirectX FFP, we can easily make our values in the approximation any value we want. The green line is a value of 1024 in Schlick's equation. You may be thinking that we can make a very sharp specular highlight using Schlick's approximation with very large values—sharper than is possible using the exponential term. Unfortunately, we can't since you really need impractically large values (say, around 100 million) to boost it significantly over the exponential value for 128. But that's just the kind of thinking that's going to get your creative juices flowing when writing your own shaders! If the traditional way doesn't work, figure out something that will.

Oren-Nayar Diffuse Reflection

Though there's been a lot of research on specular reflection models, there's been less research on diffuse reflection models. One of the problems of the standard Lambertian model is that it considers the surface as a smooth diffuse surface. Surfaces that are really rough, like sandpaper, exhibit much more of a backscattering effect, particularly when the light source and the view direction are in the same direction.

The classic example of this is a full moon. If you look at the picture of the moon shown in Figure 3.19, it's pretty obvious that this doesn't follow the Lambertian distribution; if it did, the edges of the moon would be in near darkness. In fact, the edges look as bright as the center of the moon. This is because the moon's surface is rough: it's a jumble of dust and rock, with diffuse reflecting surfaces at all angles, so the quantity of reflecting surfaces is roughly uniform no matter how the surface is oriented toward the viewer, and the amount of light reflecting off the surface is nearly the same.

Figure 3.19: The full moon is a good example of something that doesn't show Lambertian diffuse shading.

The effect we're looking at is called backscattering. Backscattering is when a rough surface bounces around a light ray and then reflects the ray in the direction the light originally came from. Note that there is a similar but different effect called retroreflection. Retroreflection is the effect of reflecting light toward the direction from which it came, no matter the orientation of the surface. This is the same effect that we see on bicycle reflectors. However, this is due to the design of the surface features (made up of v-shaped or spherical reflectors) rather than a scattering effect.

In a similar manner, when the light direction is closer to the view direction, we get the effect of forward scattering. Forward scattering is just backscattering from a different direction. In this case, instead of near uniform illumination though, we get near uniform loss of diffuse lighting. You can get the same effects here on Earth. Figures 3.20 and 3.21 show the same surfaces demonstrating backscattering and forward scattering. Both the dirt field in Figure 3.20 and the soybean field in Figure 3.21 can be considered rough diffuse reflecting surfaces.

Figure 3.20: The same dirt field showing wildly differing reflection properties.

Figure 3.21: A soybean field showing differing reflection properties.

Notice how the backscattering image shows near uniform diffuse illumination, whereas the forward scattering image shows uniformly dull diffuse illumination. Also note that in the forward scattering image you can see specular highlights and more color variation because of the shadows cast by the rough surface, whereas the backscattered image washes out that detail.

In an effort to better model rough surfaces, Oren and Nayar [OREN 1992] came up with a generalized version of a Lambertian diffuse shading model that tries to account for the roughness of the surface. They applied the Torrance-Sparrow model for rough surfaces with isotropic roughness and provided parameters to account for the various surface structures found in the Torrance-Sparrow model. By comparing their model with actual data, they simplified their model to the terms that had the most significant impact. The Oren-Nayar diffuse shading model looks like this:

L_r = \frac{\rho}{\pi} E_0 \cos\theta_i \left( A + B \max\left[ 0, \cos(\phi_r - \phi_i) \right] \sin\alpha \tan\beta \right)

where

A = 1 - 0.5 \frac{\sigma^2}{\sigma^2 + 0.33}

B = 0.45 \frac{\sigma^2}{\sigma^2 + 0.09}

Now this may look daunting, but it can be simplified to something we can appreciate if we replace the original notation with the notation we've already been using. ρ/π is a surface reflectivity property, which we can replace with our surface diffuse color. E_0 is a light input energy term, which we can replace with our light diffuse color. And the θ_i term is just our familiar angle between the vertex normal and the light direction. Making these exchanges gives us

i_d = m_d s_d \cos\theta_i \left( A + B \max\left[ 0, \cos(\phi_r - \phi_i) \right] \sin\alpha \tan\beta \right)

which looks a lot more like the equations we've used. There are still some parameters to explain.

  • σ is the surface roughness parameter. It's the standard deviation (in radians) of the angle distribution of the microfacets in the surface roughness model. The larger the value, the rougher the surface.

  • θr is the angle between the vertex normal and the view direction.

  • φr - φi is the circular angle (about the vertex normal) between the light vector and the view vector.

  • α is max(θi, θr).

  • β is min(θi, θr).

Note that if the roughness value is zero, the model is the same as the Lambertian diffuse model. Oren and Nayar also note that you can replace the value 0.33 in coefficient A with 0.57 to better account for surface interreflection.
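
Here's a C++ sketch of the simplified Oren-Nayar diffuse factor as described above. The function and parameter names are my own, the angles are in radians, and the returned value is the factor that would multiply the diffuse material and light colors.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    // Simplified Oren-Nayar diffuse factor (the value that multiplies the diffuse colors).
    //   theta_i : angle between the vertex normal and the light direction (radians)
    //   theta_r : angle between the vertex normal and the view direction (radians)
    //   dPhi    : circular angle about the normal between the light and view vectors (radians)
    //   sigma   : surface roughness, the standard deviation of the microfacet angles (radians)
    float OrenNayarFactor(float theta_i, float theta_r, float dPhi, float sigma)
    {
        float s2    = sigma * sigma;
        float A     = 1.0f - 0.5f * s2 / (s2 + 0.33f);    // use 0.57f to account for interreflection
        float B     = 0.45f * s2 / (s2 + 0.09f);
        float alpha = std::max(theta_i, theta_r);
        float beta  = std::min(theta_i, theta_r);
        return std::cos(theta_i) *
               (A + B * std::max(0.0f, std::cos(dPhi)) * std::sin(alpha) * std::tan(beta));
    }

    int main()
    {
        const float kPi   = 3.14159265f;
        const float theta = kPi / 3.0f;   // light and viewer both 60 degrees off the normal
        // In a backscattering setup (dPhi = 0), a rough surface stays brighter than a Lambertian one
        std::printf("sigma = 0.0: %.3f   sigma = 0.5: %.3f\n",
                    OrenNayarFactor(theta, theta, 0.0f, 0.0f),
                    OrenNayarFactor(theta, theta, 0.0f, 0.5f));
    }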

[3]An excellent explanation of how to compute the reflection vector can be found in [RTR].


