Nonphotorealistic Rendering

Throughout this chapter, we have tried to model the subtle interactions of light with surfaces and materials, and to convey a sense of realism. We will now change our mind-set completely. Nonphotorealistic rendering (NPR) tries to create intentionally unrealistic images. Cartoons, oil paintings, and line drawings are simulated using clever texturing tricks, so a unique, artistic look is achieved. These techniques began as offline processes, but the advent of powerful 3D accelerators and faster algorithms has recently opened the door to real-time games with an NPR look.

NPR techniques must be explained one by one because there is no single general principle behind them. Some are postprocesses applied to the rendered frame, others are part of a modified rendering algorithm, and so on. Here I will provide samples of different effects to illustrate the usual strategies. We will explore painterly rendering, which tries to simulate brush strokes; sketchy rendering, which gives a pencil-sketch look; and cartoon rendering, which mimics the style of a comic book.

Pencil Rendering

Simulating a pencil drawing can be performed at rendering time by using a two-pass approach. The first pass renders the full objects; the second sketches them with pencil-drawn edges. To reach good results, it is very important to acquire good source textures, which we will use to cover the surfaces of the scene. One such texture can be seen in Figure 17.11, along with a frame of Quake rendered using sketchy rendering. The picture belongs to the NPR Quake project.

Figure 17.11. Style renderings from the NPR Quake project. Top left: pencil. Top right: ink. Bottom: cartoon.


Sketch rendering uses two techniques to convey the feeling of pencil-drawn graphics. First, objects are silhouetted with strokes that are deliberately loose and imperfect. Second, surfaces are painted with pencil-drawn textures, which complete the nonrealistic look.

Outlined Drawing

Computing the outline of an object is a somewhat involved process, which can be solved in a variety of ways. If you have edge-triangle adjacency information, you can compute the silhouette of the object by selecting those edges whose two support triangles have different orientations with regard to the viewer: if one of the two support triangles faces the viewer and the other does not, that edge must be part of the silhouette. This simple algorithm runs in O(number of edges) time. But many games do not store triangle-edge relationships, so methods that do not require adjacency data are needed.
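
As an illustration, here is a minimal C++ sketch of that adjacency test, assuming each edge stores the indices of its two adjacent triangles and each triangle stores a precomputed face normal and center point; the Vec3, Triangle, and Edge types and the function names are illustrative, not part of any particular engine.

 struct Vec3     { float x, y, z; };
 struct Triangle { Vec3 normal; Vec3 center; };
 struct Edge     { int tri[2]; int vert[2]; };   // adjacent triangles and endpoints

 static float Dot(const Vec3 &a, const Vec3 &b)
 {
     return a.x * b.x + a.y * b.y + a.z * b.z;
 }

 // An edge belongs to the silhouette when one adjacent triangle faces the
 // viewer and the other one does not.
 bool IsSilhouetteEdge(const Edge &e, const Triangle *tris, const Vec3 &eyePos)
 {
     const Triangle &t0 = tris[e.tri[0]];
     const Triangle &t1 = tris[e.tri[1]];

     Vec3 toEye0 = { eyePos.x - t0.center.x, eyePos.y - t0.center.y, eyePos.z - t0.center.z };
     Vec3 toEye1 = { eyePos.x - t1.center.x, eyePos.y - t1.center.y, eyePos.z - t1.center.z };

     bool facing0 = Dot(t0.normal, toEye0) > 0.0f;
     bool facing1 = Dot(t1.normal, toEye1) > 0.0f;
     return facing0 != facing1;   // opposite orientations: silhouette edge
 }

Walking the edge list once with this test is what gives the O(number of edges) behavior mentioned above.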

A second method uses a two-pass rendering scheme to paint the outline. In the first pass, the object is rendered normally, with back-facing triangles eliminated by culling, so only front-facing primitives are painted.

In the second pass, we paint the object again, this time culling front-facing triangles away and drawing the remaining back-facing triangles as thick lines, the thickness depending on the look we want to achieve. With the depth test set to GL_LEQUAL, line pixels that fall inside the object are rejected by the closer front-facing geometry painted in the first pass, but line pixels along the silhouette extend outside the boundary of the object due to their thickness, so nothing covers them and they remain visible. Here is the source code in OpenGL for this effect:

 // First pass: render the geometry normally
 glEnable(GL_CULL_FACE);
 glPolygonMode(GL_BACK, GL_FILL);    // back faces (when drawn) are filled
 glCullFace(GL_BACK);                // don't draw back-facing triangles
 RenderTriangles();                  // render (1st pass)

 // Second pass: silhouette rendering
 glPolygonMode(GL_BACK, GL_LINE);    // draw back-facing triangles as wireframe
 glLineWidth(5.0f);                  // line width controls outline thickness
 glCullFace(GL_FRONT);               // don't draw front-facing triangles
 glDepthFunc(GL_LEQUAL);             // let silhouette lines pass the depth test
 glDisable(GL_TEXTURE_2D);           // outline is drawn untextured
 glColor3f(0.0f, 0.0f, 0.0f);        // outline color (black)
 RenderTriangles();                  // render (2nd pass)

This method has the downside of painting the object twice in order to get accurate silhouettes. Many other methods exist, but all of them carry some cost. As an example, take the method presented by Jason Mitchell et al. at SIGGRAPH 2002. In this paper, the scene is rendered separately to two different render targets: one holds RGB data, whereas the second holds depth values. Then, an edge detection filter is applied in order to highlight the edges. Computing silhouettes always involves extra work, so triangle counts and detail levels should be watched in order to achieve reasonable real-time performance.
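
To make the image-space idea concrete, here is a rough CPU-side sketch of a depth-discontinuity filter; it only illustrates the principle (in the actual technique the filter runs on the GPU over the render targets), and the buffer layout and threshold value are assumptions.

 #include <math.h>

 // Mark a pixel as an edge when the depth values around it differ by more
 // than a threshold; depth is a w*h buffer read back from the depth target.
 void DetectEdges(const float *depth, unsigned char *edges, int w, int h, float threshold)
 {
     for (int y = 1; y < h - 1; y++)
         for (int x = 1; x < w - 1; x++)
         {
             float d  = depth[y * w + x];
             float dx = fabsf(depth[y * w + (x + 1)] - d);
             float dy = fabsf(depth[(y + 1) * w + x] - d);
             edges[y * w + x] = (dx > threshold || dy > threshold) ? 255 : 0;
         }
 }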

Stroked Outlines

Artists often draw outlines not as a single, continuous stroke, but as a series of imperfect strokes, which, when combined, create the illusion of shape. The outlines we discussed in the previous section are straight and of constant width, so they do not really look hand drawn. We can improve them by simulating these kinds of stroked outlines.

To do so, all we need to do is follow a three-step algorithm:

 Detect the silhouette
 For each edge in the silhouette
     Project it to 2D space
     Generate N sub-strokes whose starting and ending points
         lie along the base stroke, plus some random jitter
     Render these sub-strokes
 End for
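
A minimal C++ sketch of the per-edge loop follows, assuming the edge endpoints have already been projected to screen space (keeping their depth so the strokes can still be depth tested), that an orthographic projection matching the screen is active, and that the usual OpenGL headers are included; ProjPoint, Jitter, and RenderStrokedEdge are illustrative names.

 #include <stdlib.h>

 struct ProjPoint { float x, y, z; };   // screen-space position; z kept for the depth buffer

 static float Jitter(float amount)      // random offset in [-amount, amount]
 {
     return amount * (2.0f * rand() / (float)RAND_MAX - 1.0f);
 }

 // Split the projected edge (a, b) into numStrokes short, slightly displaced
 // segments so the outline looks hand drawn rather than ruler straight.
 void RenderStrokedEdge(const ProjPoint &a, const ProjPoint &b, int numStrokes, float jitter)
 {
     glBegin(GL_LINES);
     for (int i = 0; i < numStrokes; i++)
     {
         // Parametric start and end of this sub-stroke along the base edge.
         float t0 = (float)i / numStrokes;
         float t1 = (float)(i + 1) / numStrokes;

         glVertex3f(a.x + (b.x - a.x) * t0 + Jitter(jitter),
                    a.y + (b.y - a.y) * t0 + Jitter(jitter),
                    a.z + (b.z - a.z) * t0);
         glVertex3f(a.x + (b.x - a.x) * t1 + Jitter(jitter),
                    a.y + (b.y - a.y) * t1 + Jitter(jitter),
                    a.z + (b.z - a.z) * t1);
     }
     glEnd();
 }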

As for the silhouette detection code, we can again follow two alternative approaches. We can compute the silhouette directly if we have the edge adjacency information. If we don't, we can simply project all edges and paint strokes for all of them. Then, in a second pass, we paint the object with a Z-buffer offset that pushes it back slightly (enough so front-facing polygons end up behind the strokes in terms of Z-value). By doing so, the strokes generated by back-facing polygons are covered by the object itself, much like in the outline algorithm described earlier.
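
One possible way to implement that Z-buffer push in fixed-function OpenGL is polygon offset. The fragment below is only a sketch: it assumes the strokes of the previous step were rendered with depth writes enabled, RenderAllEdgeStrokes() is a hypothetical wrapper around the stroke loop above, and the offset values are just plausible starting points.

 // Pass 1: jittered strokes for every projected edge (no adjacency data needed).
 RenderAllEdgeStrokes();

 // Pass 2: the filled object, pushed back slightly in depth. Strokes generated
 // by back-facing geometry lie behind the surface and are covered; strokes on
 // the visible edges stay in front because the surface has been pushed back.
 glEnable(GL_POLYGON_OFFSET_FILL);
 glPolygonOffset(1.0f, 2.0f);   // positive values push fragments away from the viewer
 RenderTriangles();
 glDisable(GL_POLYGON_OFFSET_FILL);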

Cel Shading

Cel shading implements very simple coloring patterns with only two or three shades per color: one for highlights, one for regular, midtone use, and one for shadows. Painting the triangles in flat colors would do quite a good job, but the most popular technique is to use a 1D texture map with coordinates derived from Lambertian illumination values computed per vertex. First, you generate a 1D texture map, which is just a line of texels. The line should be divided into three discrete portions: a darker shade for shadows, a midtone, and a lighter shade for highlights. Because we will be using the same texture map for all the rendering and implement colors as per-vertex modulations, the map should be grayscale.
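
As a minimal sketch, such a map could be built and uploaded with legacy OpenGL calls along these lines; the 32-texel size, the three gray levels, and the CreateCelTexture name are arbitrary choices.

 GLuint CreateCelTexture()
 {
     unsigned char shades[32];

     // Three discrete bands: shadow, midtone, highlight.
     for (int i = 0; i < 32; i++)
     {
         if (i < 8)       shades[i] = 80;    // shadow band
         else if (i < 24) shades[i] = 170;   // midtone band
         else             shades[i] = 255;   // highlight band
     }

     GLuint tex;
     glGenTextures(1, &tex);
     glBindTexture(GL_TEXTURE_1D, tex);
     // GL_NEAREST keeps the bands sharp; linear filtering would blur the borders.
     glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
     glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
     glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP);
     glTexImage1D(GL_TEXTURE_1D, 0, GL_LUMINANCE, 32, 0,
                  GL_LUMINANCE, GL_UNSIGNED_BYTE, shades);
     return tex;
 }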

Second, at rendering time, we need to calculate texture coordinates for each vertex. The process is relatively straightforward. Using the illumination information, you compute the Lambertian diffuse lighting coefficient. Remember that this coefficient comes from the equation:

 Id = N · L

where N is the per-vertex normal and L is the normalized vector from the vertex to the light source. If there is more than one light source, the contributions are summed and divided by the number of lights.

In a realistic rendering core, we would use this diffuse intensity to modulate the vertex colors. In cel shading, the value is instead used to index the 1D texture map. The result is that well-lit areas get the lighter shade, medium-lit zones get the midtone, and so on. The main difference from regular texture mapping and lighting is that in cel shading you can see a sharp line dividing highlights, midtones, and shadows; there is no gradation of colors. Remember that for this technique to work, all vertices must be assigned per-vertex colors, which, by modulating the grayscale 1D texture, produce the final mesh colors. So artists will need to vertex-colorize their meshes before putting them in the rendering engine.
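
Per vertex, the lookup could be fed with immediate-mode OpenGL along these lines; celTexture is the 1D map built earlier, and the vertex arrays, Dot(), and LightDirectionAt() are illustrative stand-ins for the engine's own data and math helpers.

 glBindTexture(GL_TEXTURE_1D, celTexture);
 glEnable(GL_TEXTURE_1D);
 glDisable(GL_LIGHTING);                // lighting is baked into the 1D lookup

 glBegin(GL_TRIANGLES);
 for (int i = 0; i < numVertices; i++)
 {
     // Lambertian coefficient, clamped to [0, 1], used directly as the texture coordinate.
     float intensity = Dot(vertexNormal[i], LightDirectionAt(vertexPos[i]));
     if (intensity < 0.0f) intensity = 0.0f;

     glColor3fv(vertexColor[i]);        // per-vertex color modulates the grayscale band
     glTexCoord1f(intensity);           // picks the shadow, midtone, or highlight band
     glVertex3fv(vertexPos[i]);
 }
 glEnd();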

Notice how cel shading uses only one very small texture map. There is no texture swapping, and bus usage is reduced. This usually means you can create higher resolution meshes, which is handy because cartoonlike characters tend to have a high-resolution, curved geometry look.

Painterly Rendering

Another NPR technique is painterly rendering, which directly tries to mimic the look of different painting styles in real time. These techniques are well explored in offline packages such as Photoshop and Fractal Design Painter. In both cases, the approach involves some sort of particle system that runs the stroke simulation. Each stroke is a particle with a life cycle and movement rules, such as following color patterns, painting along edges, and so on. Generally speaking, all systems start with either a rendered frame (where each particle is just a "color space navigator") or pure geometry. Then, particles are assigned predefined texture maps, which help convey the style of painting we want to mimic.
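
As a very rough illustration of the frame-based variant, a stroke particle might look like the following; the Image type, its Sample() method, StampBrushTexture(), and the 0.3 color threshold are hypothetical.

 #include <math.h>

 // A stroke particle samples the rendered frame once, keeps that color, and
 // drifts across the image for a few steps, stamping textured brush quads.
 struct StrokeParticle
 {
     float x, y;       // position in image space
     float dx, dy;     // movement direction
     float r, g, b;    // color sampled from the source frame at birth
     int   life;       // remaining steps before the stroke ends
 };

 void UpdateStroke(StrokeParticle &p, const Image &frame)
 {
     // Movement rule: keep painting while the underlying color stays close to
     // the stroke's own color, so strokes follow color regions and stop at edges.
     float fr, fg, fb;
     frame.Sample(p.x, p.y, fr, fg, fb);
     if (fabsf(fr - p.r) + fabsf(fg - p.g) + fabsf(fb - p.b) > 0.3f)
     {
         p.life = 0;
         return;
     }

     StampBrushTexture(p);   // draw one textured brush quad at (p.x, p.y)
     p.x += p.dx;
     p.y += p.dy;
     p.life--;
 }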

The downside to all these approaches is that the number of particles and the complexity of the simulation make them hard to port to a real-time environment with conventional techniques. Most implementations do not move beyond 2-3 frames per second, if that, and require particle counts at least in the thousands. The main hope here lies in shader-based approaches, which take advantage of vertex and fragment shader capabilities and can offload these computations from the CPU. Just as regular particle systems can be implemented in a shader, painterly rendering algorithms will likely follow their lead.


