Chapter 1: Introduction

This is the true joy in life, being used for a purpose recognized by yourself as a mighty one. Being a force of nature instead of a feverish, selfish little clod of ailments and grievances complaining that the world will not devote itself to making you happy. I am of the opinion that my life belongs to the whole community and as I live it is my privilege—my privilege—to do for it whatever I can. I want to be thoroughly used up when I die, for the harder I work the more I live. I rejoice in life for its own sake. Life is no brief candle to me; it is a sort of splendid torch which I've got a hold of for the moment and I want to make it burn as brightly as possible before handing it on to future generations.

--George Bernard Shaw

Shader: A custom shading and lighting procedure that allows the motivated artist or programmer to specify the rendering of a vertex or pixel.
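To make the definition concrete, here is a toy sketch in plain Python (not any real shading language or API; the function name and inputs are invented for illustration) of the kind of per-pixel procedure a shader lets you supply: given a pixel's surface normal, a light direction, and a base color, it computes that pixel's final color.

```python
def toy_pixel_shader(normal, light_dir, base_color):
    """Hypothetical per-pixel procedure: a simple diffuse (Lambert) term.

    Both normal and light_dir are assumed to be unit-length 3-tuples.
    """
    # Dot product of the normal and light direction, clamped to [0, 1].
    n_dot_l = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    # Scale the base color by the diffuse term.
    return tuple(c * n_dot_l for c in base_color)

# A pixel whose normal faces the light keeps its full base color...
print(toy_pixel_shader((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (1.0, 0.5, 0.2)))
# ...while one facing away from the light receives no diffuse light at all.
print(toy_pixel_shader((0.0, 0.0, -1.0), (0.0, 0.0, 1.0), (1.0, 0.5, 0.2)))
```

The point is not the particular math, which here is just the standard diffuse term, but that the procedure itself is something the artist or programmer writes, rather than a fixed function baked into the hardware.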

"Shader" comes from Pixar's RenderMan, which is a program that takes an entire description of a scene, from camera positions through object geometry, to a final rendering. RenderMan was introduced in 1989, but it wasn't really until the 1995 release of the movie Toy Story that the general public was introduced to the power of RenderMan. About this same time, there was a revolution taking place on the graphics boards of PCs; the boards were evolving at a faster and faster clip, and the features that were showing up on "commodity" boards were rivaling those previously found only on workstations.

As Pixar continued to make hit after hit using RenderMan, other movie studios soon joined in. Meanwhile, the PC games community was finding new uses for the powerful graphics cards with which new PCs were now equipped. Light maps in particular soon found their way into games, followed by bump maps and procedural vertex generation. In fact, it was the games community that soon started clamoring for more features, and in order to differentiate themselves from the pack, some graphics card vendors heeded this call and started layering more and more features onto their cards. This had a snowball effect, creating a larger and larger installed base of fairly sophisticated PCs with a good selection of graphics features.

The latter part of the century also saw the graphics API war: OpenGL (the "established" standard) vs. Direct3D (the "upstart" API from Microsoft). While the two fractious camps squared off, leaving many agnostics in the middle, it soon became clear that the committee-based architecture review board that governs OpenGL's evolution was too mired in the plodding workstation world, whereas the nimble and ferocious commodity PC graphics board vendors were continually trying to outdo each other with every release, a pace better suited to Direct3D's sometimes annoying yearly release cycle.

With only a few graphics boards on the market, it was easy to program to OpenGL and ignore the many growing pains of Direct3D, but in 1999 accelerated graphics hardware dedicated solely to games started to show up, and by 2001 it was the norm. By this time, the games development community had discovered the traditional graphics researchers (the "Siggraph crowd") and vice versa. The game developers discovered that many of the problems they faced had already been solved in the 1970s and 1980s as workstations were evolving, and the graphics researchers liked the features that could be found on $100 video cards.

Thus we come to the era of the shading language. Both OpenGL and Direct3D use the "traditional" lighting equations, and these have served quite well for many years. But game programmers are continually striving to create the most gorgeous "eye candy" that they can (ignoring the actual game play for the moment). The traditional lighting equations are fine for common things, but if you are trying to create some stunning visual effects, Gouraud-shaded objects are just not going to cut it. And here we come full circle and return to Toy Story. For a number of years, it was bandied about in the graphics community that we were approaching Toy Story levels of real-time animation. Now Toy Story was emphatically not rendered in real time. It took many months running on 117 Sun SPARCstations to grind out its more than 130,000 frames of animation (plus 1300 unique shaders!). But both the computer's CPU and the graphics card's GPU (graphics processing unit) were getting continually more powerful. GPUs were becoming capable of offloading many of the things traditionally done by the CPU.
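The Gouraud-shading limitation is easy to see with a little arithmetic. In this sketch (plain Python, invented for illustration, not any real graphics API), per-vertex lighting interpolated across a surface misses a highlight that lighting each pixel individually would catch:

```python
import math

def diffuse_intensity(normal, light_dir):
    """Clamped Lambert term N.L for unit-length 3-tuples."""
    return max(0.0, sum(n * l for n, l in zip(normal, light_dir)))

light = (0.0, 0.0, 1.0)  # light shining straight down the z axis

# Two vertex normals, each tilted 60 degrees away from the light
# in opposite directions, as on a curved surface.
tilt = math.radians(60)
n0 = (math.sin(tilt), 0.0, math.cos(tilt))
n1 = (-math.sin(tilt), 0.0, math.cos(tilt))

# Gouraud shading: light the vertices, then interpolate the *intensities*.
gouraud_mid = 0.5 * (diffuse_intensity(n0, light) + diffuse_intensity(n1, light))

# Per-pixel shading: interpolate the *normal*, renormalize, then light it.
mid = tuple(0.5 * (a + b) for a, b in zip(n0, n1))
length = math.sqrt(sum(c * c for c in mid))
mid = tuple(c / length for c in mid)
per_pixel_mid = diffuse_intensity(mid, light)

print(gouraud_mid)    # roughly 0.5: interpolated vertex intensities stay dim
print(per_pixel_mid)  # 1.0: the surface midpoint actually faces the light
```

Halfway between the two vertices the surface faces the light head-on, so per-pixel lighting gives full intensity, while Gouraud interpolation stays at the dimmer average of the vertex values. This is exactly the class of effect that drove demand for programmable shading.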

The first things that GPUs took over from the CPU were some of the simple, repetitive math operations the CPU used to have to do: object transformation and lighting, or TnL [Fosner 2000]. This was a simple change, and in many cases applications that used the graphics API for transformation and lighting saw a speedup with no code changes. On the other hand, there were many games whose programmers had decided that they could do a better job than the API and had done the work themselves, and these games got no benefit from the better hardware. But by this time, the writing was on the wall: no matter how good a programmer you were, your software implementation would never run as fast as the generic all-purpose hardware implementation.

However, there was one mantra that graphics hardware developers were continually hearing from graphics software developers: they wanted a bigger say in deciding how pixels got rendered. Of course, it still had to be hardware accelerated, but they wanted a chance to put their own interpretation on the rendered object. In fact, they wanted something like RenderMan shaders.