Game development is an evolving science, and more sophisticated and efficient methods appear constantly. One clear sign of this evolution is the shift from explicit to implicit representations in many areas, AI among them. In the old days, enemy behavior was built directly into the game code using state machines or nested if-else structures. It was a clearly explicit, closed definition: what you saw was what you got. Now compare this explicit AI definition with a modern scripting system, where the AI is exposed as an open API so content integrators can work with it and tweak it separately from the main source files. Here the representation is implicit: content integrators can bind any AI procedure (provided it follows certain coding conventions) to the core game and use the result as a seamless whole. All areas of game programming have followed a similar approach, whether it is Digital Signal Processing (DSP) filters for audio or, as the subject of this chapter states, implementing graphics routines using shaders.
A shader is just a routine that implements part of the graphics-processing tasks as a filter. Shaders emerged in the late 1980s as a fundamental component of RenderMan, Pixar's seminal renderer and scene description methodology. Today, most graphics cards support them in hardware, so boards can run shaders internally and thus exceed the expressive potential of fixed-function pipelines. Simply put, a shader implements an effect by means of a small program. This global philosophy, replacing part of the explicit data with implicit routines, is called the procedural paradigm, and it has surfaced in several areas of game development. Given the importance of the presentation layer, it is not surprising that the graphics pipeline is where these kinds of methodologies can best be used.