Before we delve into specific texture mapping techniques, we must start by classifying textures and their uses. Texturing is a very broad topic, and just using the term alone can result in misunderstandings.

Textures can be classified as explicit or procedural. An explicit texture map consists of a regular bitmap, which we can create with a paint program, scan from a picture, and so on. It is very easy to apply and has almost no CPU cost: we just have to provide the texture coordinates, and the graphics subsystem takes care of the texturing process. A procedural texture, on the other hand, is the output of a computer program that computes the texture map. Marble, fire, and many other materials can be decomposed into primitive equations and functions, which can be implemented as a program. Procedural textures have several benefits over explicit textures. They are resolution independent, because we can zoom into the texture and compute the extra detail. And because they are defined by mathematical functions that include some randomness, they are usually not as repetitive as explicit maps. On the other hand, they carry a significant performance hit: whether they are executed on the CPU or on a dedicated shader platform, procedural maps take up system resources. This chapter focuses almost exclusively on explicit textures; procedural maps are covered in Chapter 21, "Procedural Techniques."

Another interesting classification can be established between static and dynamic texture maps. The difference between them is that dynamic maps are recomputed in real time, whereas static maps are created once at boot time. Both explicit and procedural textures can be static or dynamic. The most common type of texture map is both static and explicit. Games like Quake used a sequence of looping fire bitmaps to create torches, which is a good example of a dynamic, explicit technique. Marble and procedural fire would be adequate examples of static and dynamic procedural maps, respectively.

Dynamic, explicit textures are frequently encoded by storing all the frames in a single, large texture map and computing the texture coordinates automatically from the current timer value. So, if the texture holds NxN frames, and the speed (in frames per second) at which we want to animate the map is S, the following pseudocode computes the texturing coordinates:

current_frame = (S * current_time) % (N*N)
row = current_frame / N
column = current_frame % N
u1 = (1/N) * row
u2 = (1/N) * (row+1)
v1 = (1/N) * column
v2 = (1/N) * (column+1)

In the preceding code, we are cycling through the texture map, using subimages as the actual texture map. Notice that I assume each image occupies a square zone. (A C version of this computation appears at the end of this section.)

Whatever the type, textures can be 1D, 2D, or 3D data sets. 2D maps are the most widely used because they can be represented with bitmaps. 3D textures, also called volume textures, appear in fields like surgery to display 3D medical images and have slowly appeared on game-oriented hardware. They take up huge amounts of memory, so their use must be very limited: a 256x256x256 3D image with each texel holding a single byte (such as a grayscale image) takes up 16MB. 1D maps have become increasingly popular over time as a way to implement a color translation table. The texture coordinate (in the range 0..1) is used as an index into a 1D table, where different colors are stored. A good example of this technique is cel shading, which produces quite convincing cartoonlike renderings.
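As a small illustration of the 1D color-translation idea, here is a minimal CPU-side sketch: an intensity value in the 0..1 range acts as the texture coordinate and selects one entry from a short color ramp. The ramp contents, its size, and the function name are assumptions made for this example, not part of any particular API.

#define RAMP_SIZE 4

/* Hypothetical 1D color-translation table: a few discrete bands,
   which is also the basic ingredient of a cel-shading ramp. */
static const unsigned char shade_ramp[RAMP_SIZE][3] = {
    {  40,  40,  40 },   /* darkest band   */
    {  90,  90,  90 },
    { 160, 160, 160 },
    { 230, 230, 230 }    /* brightest band */
};

/* Map a texture coordinate in 0..1 to an entry of the 1D table. */
const unsigned char *Lookup1D(double coord)
{
    int index = (int)(coord * RAMP_SIZE);
    if (index < 0) index = 0;
    if (index >= RAMP_SIZE) index = RAMP_SIZE - 1;
    return shade_ramp[index];
}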
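Returning to the animated-texture pseudocode above, the following is a C version of the same frame-selection logic, written as a self-contained function. The function name and parameter layout are illustrative; as in the pseudocode, rows map to U and columns to V.

/* Sketch of the frame-selection logic described earlier: an NxN grid of
   animation frames packed into one texture, cycled at S frames per second.
   Returns the UV rectangle of the current frame. */
void AnimatedFrameUV(double current_time, int N, double S,
                     double *u1, double *v1, double *u2, double *v2)
{
    int current_frame = (int)(S * current_time) % (N * N);
    int row    = current_frame / N;   /* which row of the grid         */
    int column = current_frame % N;   /* which column of the grid      */
    double cell = 1.0 / N;            /* size of one frame in UV space */

    *u1 = cell * row;
    *u2 = cell * (row + 1);
    *v1 = cell * column;
    *v2 = cell * (column + 1);
}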
Cel shading is discussed in Chapter 17, "Shading."

Now, let's go back to our classic 2D texture maps. Because objects are inherently three dimensional, we need a way to precisely specify how the texture map should stretch and wrap around the object, so we can place it. This process is called texture mapping. In the 2D case, texture mapping is a function that goes from X,Y,Z to U,V, the mapping coordinates. This way it defines the correspondence between vertices on the geometry and texels in the map. For example, the following is a perfectly valid texture mapping function:

U = X + Z
V = Y

However, things are rarely this simple, and more involved functions must be used. Following is a survey of classic mapping functions.

XYZ Mapping

This function is used for 3D textures, especially procedural ones. It directly maps space to texture coordinates, possibly with translations, rotations, and scalings added on top. The general equation is as follows:

U = X
V = Y
W = Z

A scaled version would look something like this:

U = X*sx
V = Y*sy
W = Z*sz

Cylindrical Mapping

Another classic texture mapping function uses a cylinder as the projector. Imagine a cylinder along the Y axis, wrapping the object with the material. That cylinder would be defined by the following parametric equations:

X = r sin(v * 2 pi)
Y = u * h
Z = r cos(v * 2 pi)

where r and h determine the cylinder's radius and height, respectively. Now, we can invert the preceding equations to solve for U and V, which are the mapping coordinates. The resulting equations are:

V = arctan(X/Z) / (2 pi)
U = Y / h

Notice that this mapping is computed with the cylinder along the Y axis. Euclidean transforms can be applied so the cylinder lies along an arbitrary axis. (A code sketch of this inverse mapping, analogous to the SphereMap function given later, appears at the end of this section.)

Spherical Mapping

Mapping onto a sphere is somewhat similar to using a cylinder. All we have to do is use the parametric equations of the sphere to construct inverse equations that represent the texture mapping. Assume a sphere defined by the following:

x = r sin(v pi) cos(u 2 pi)
y = r sin(v pi) sin(u 2 pi)
z = r cos(v pi)

Notice how we have used U and V (in the texture mapping sense) as the parameters of the sphere; both are in the range 0..1. Then, reversing the equations after half a page of algebra, you get:

u = arccos(x / (r sin(v pi))) / (2 pi)
v = arccos(z/r) / pi

The preceding formula assumes that arccos can return a value anywhere from 0 to 2 pi. Most software implementations return values in the range 0..pi, so the code must handle the other half of the circle explicitly for correct rendering. Here is the source code for a spherical texture mapping function:

#include <math.h>

#define PI    3.141592654
#define TWOPI 6.283185308

void SphereMap(double x, double y, double z, double radius,
               double *u, double *v)
{
    /* latitude: angle from the pole, normalized to 0..1 */
    *v = acos(z / radius) / PI;

    /* longitude: acos only covers half the circle, so use the sign
       of y to decide which half of the sphere we are on */
    if (y >= 0)
        *u = acos(x / (radius * sin(PI * (*v)))) / TWOPI;
    else
        *u = (TWOPI - acos(x / (radius * sin(PI * (*v))))) / TWOPI;
}

Notice how spherical mapping causes distortion to appear near the two poles of the sphere. To prevent this phenomenon, called pinching, we need to preprocess the texture, either with a commercial image processing package or by implementing the process ourselves. Simply put, we need to convert the texture from polar to rectangular coordinates. Tools like Photoshop and Paint Shop Pro can do this type of processing.
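For completeness, here is the sketch of the cylindrical mapping mentioned earlier. It is not from the original text: the function name is made up, atan2 is used so the quadrant is resolved without a manual sign test, and the angle is normalized into the 0..1 range.

#include <math.h>

#define TWOPI 6.283185308

/* Hypothetical CylinderMap, analogous to SphereMap: inverse of the
   cylindrical projection along the Y axis described earlier, where
   h is the cylinder's height. */
void CylinderMap(double x, double y, double z, double h,
                 double *u, double *v)
{
    double angle = atan2(x, z);        /* angle around the Y axis, -pi..pi */
    if (angle < 0) angle += TWOPI;     /* wrap into 0..2*pi                */
    *v = angle / TWOPI;                /* angular coordinate, 0..1         */
    *u = y / h;                        /* height along the axis, 0..1      */
}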
Texture Mapping a Triangle

The methods discussed in the previous sections are used for full objects that more or less conform to the mapping shapes, such as a planet or a banana. Most objects, however, have complex shapes that cannot be easily assigned to a sphere or cylinder. In these cases, we need a general method that offers triangle-level control over the texturing process, assigning texture coordinates manually so the maps wrap the object precisely and accurately.

Given a triangle p1, p2, p3 with texture coordinates (u1,v1), (u2,v2), and (u3,v3), deciding the texture coordinates for a point p (in world coordinates) inside the triangle is just an interpolation. We start by building two vectors, from p1 to p2 and from p1 to p3, and express p as a linear combination of these two vectors. The coefficients of that linear combination are the blending weights that, applied to the per-vertex texture coordinates, give us the U,V coordinates for the point p. See the example in Figure 18.1.

Figure 18.1. Texture mapping triangles.

On the other hand, rasterizers usually texture pixels in 2D, after projection, not full 3D points. Thus, a different mapping approach is followed. We start with the triangle defined not by 3D points, but by 2D, projected coordinates. Then, the triangle is painted one scanline at a time, drawing the horizontal line of pixels and texturing them along the way. Clearly, we need some kind of depth information to distinguish, from a texturing standpoint, two triangles that look the same onscreen but are not the same in 3D (see Figure 18.2). Thus, the 2D triangle coordinates are augmented with additional data that is then used to perform perspective-correct texture mapping, taking depth into consideration. Because these computations are performed by the hardware, they are beyond the duties of a game programmer these days.

Figure 18.2. Two distinct triangles that share the same projection.
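To make the edge-vector construction described at the start of this section concrete, here is a minimal sketch of the per-triangle interpolation, assuming the point p lies on the triangle's plane. The structure, helper functions, and the name TriangleMap are illustrative, not part of the original text.

typedef struct { double x, y, z; } Vec3;

static double Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec3 Sub(Vec3 a, Vec3 b)
{
    Vec3 r;
    r.x = a.x - b.x; r.y = a.y - b.y; r.z = a.z - b.z;
    return r;
}

/* Solve p - p1 = a*(p2-p1) + b*(p3-p1) for the coefficients a and b
   (via the 2x2 normal equations), then blend the per-vertex texture
   coordinates with those same coefficients. */
void TriangleMap(Vec3 p1, Vec3 p2, Vec3 p3,
                 double u1, double v1, double u2, double v2,
                 double u3, double v3,
                 Vec3 p, double *u, double *v)
{
    Vec3 e1 = Sub(p2, p1);            /* first edge vector     */
    Vec3 e2 = Sub(p3, p1);            /* second edge vector    */
    Vec3 d  = Sub(p,  p1);            /* point relative to p1  */

    double d11 = Dot(e1, e1), d12 = Dot(e1, e2), d22 = Dot(e2, e2);
    double q1  = Dot(d, e1),  q2  = Dot(d, e2);
    double det = d11*d22 - d12*d12;

    /* a and b are the blending coefficients of the linear combination */
    double a = (q1*d22 - q2*d12) / det;
    double b = (q2*d11 - q1*d12) / det;

    *u = u1 + a*(u2 - u1) + b*(u3 - u1);
    *v = v1 + a*(v2 - v1) + b*(v3 - v1);
}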