With the basic graphics engine now up and running you can focus on shader development for a while. As mentioned in the last chapter, everything that is rendered in XNA is done with the help of shaders, even if you don’t interact with shaders directly when using the SpriteBatch class or the BasicEffect class to simulate fixed function behavior. This chapter goes through the whole process of writing a shader from the ground up, and then you learn all about vertex buffers, the vertex shader process, and how pixel shaders finally bring the pixels to the screen. If you are already a shader expert you will probably want to skip this chapter or just skim it quickly. But because the next chapters all rely on a fundamental knowledge of shaders, I want to make sure everyone is on the same level.
Because shaders are so important in XNA you should really try to follow all the steps in this chapter if you don’t have firm shader knowledge yet. The chapter is designed to let you follow every single step by yourself; by the end of it you will have written your first shader and you will know how the whole process works from start to finish. Then you will be ready for the next chapters and you will be able to write your own shaders easily.
Before starting with cool normal mapping effects or post-screen shaders in the next chapters you have to learn the basics first. Shaders are not just used for high-quality effects; they also replace the very simple rendering processes that were used in the days of the fixed function pipeline. After learning a little bit about the history of shaders and graphics card development you will go through the process of creating a simple shader just to render an example 3D model with per-pixel specular lighting, and then import the shader into your graphics engine. The main tool you are going to use to create shaders and test them before you even start putting more code in your engine is FX Composer from Nvidia, which is a great tool for quickly creating your own shaders.
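To give you a first idea of where this chapter is heading, the following fragment shows roughly what the pixel shader part of a per-pixel specular (Blinn-Phong) effect looks like in HLSL. This is only a sketch, not the shader you build later in the chapter; all names here (lightDir, cameraPos, shininess, and the input semantics) are illustrative assumptions.

```hlsl
// Sketch only: a minimal Blinn-Phong pixel shader.
float3 lightDir;       // normalized direction to the light (assumed name)
float3 cameraPos;      // camera position in world space (assumed name)
float shininess = 24;  // specular exponent, higher = sharper highlight

float4 PS_SpecularPerPixel(float3 normal : TEXCOORD0,
                           float3 worldPos : TEXCOORD1) : COLOR
{
    float3 N = normalize(normal);
    float3 V = normalize(cameraPos - worldPos); // view vector
    float3 H = normalize(lightDir + V);         // half vector
    // Diffuse term from N dot L, specular term from (N dot H)^shininess.
    float diffuse  = saturate(dot(N, lightDir));
    float specular = pow(saturate(dot(N, H)), shininess);
    return float4(diffuse.xxx + specular.xxx, 1);
}
```

The important point for now is just that lighting is computed per pixel from interpolated vertex shader outputs; every detail of this is explained step by step later in the chapter.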
Fixed function means that the GPU developers have programmed a fixed way to transform vertices (in case the card is T&L capable, which means it can transform and light vertices) and output pixels in a pre-programmed way directly on the GPU hardware. It is not possible to change this behavior; you can only enable and disable certain features. For early games this was quite nice because the GPU hardware did a lot of work and kept the CPU free for other tasks, and the GPU was a lot faster at rendering 3D data than the CPU because it is heavily optimized just for rendering polygons. If you hear of “Direct3D 7.0 capable graphics cards,” this is exactly what they could do; earlier cards had similar behavior, but in Direct3D 5.0 and before it was really hard to program directly at the hardware level of the GPU with execute buffers and the like. Direct3D 7.0 simplified the rendering process when the Nvidia GeForce series of graphics cards became available around 1999. It was now possible to render polygons with multiple textures at once, use texture compression to fit more textures into video memory, and use vertex buffers directly in video memory for improved performance.
Even after these great advancements in Direct3D 7.0 games did not look much different; they just ran faster, had maybe more polygons, and made use of texture compression and multi-texture effects to show more textures and mix them together a little. The real revolution came with DirectX 8 and DirectX 9. DirectX 8 finally merged the DirectDraw component into the new DirectX Graphics API and introduced vertex shaders, pixel shaders, and many effects - bump mapping, custom texture mapping, and so on - but it only supported Shader Model 1.x, it was not very user-friendly in the beginning because of the many API changes, and writing assembly shaders is no piece of cake.
DirectX 9 added support for HLSL (High Level Shader Language, which you learn about in this chapter) and new features for upcoming graphics cards like HDR (high dynamic range) rendering and multiple render targets, which allow techniques like deferred shading. DirectX 9 was very popular and many developers switched from OpenGL to DirectX just because OpenGL did not support shaders properly in the beginning and the OpenGL 2.0 standard took way too long to be approved. The OpenGL extension model is also inferior for shaders compared to the Direct3D shader classes, and it took a long time until better mechanisms were available in OpenGL. Direct3D 10 is again much ahead of OpenGL, having supported Shader Model 4.0 and geometry shaders for more than a year now, and OpenGL has a lot of catching up to do. Direct3D 10 currently works only on Windows Vista, but you can still do some Shader Model 4.0 development with OpenGL extensions on XP. Most Windows game developers will probably switch to Vista just for Direct3D 10 in a few years. Contrary to the belief of some people there is no support for Shader Model 4.0 in DirectX 9; the so-called “DirectX 9.0L” is just a way to support DirectX 9.0 games and applications on Vista. In my opinion it is not good for game developers to have so many different versions of DirectX around: supporting DirectX 9 while falling back to older hardware that can maybe only do DirectX 7 or 8, then having to implement Direct3D 10 for the newest effects on Vista, and finally also keeping DirectX 9.0 as an option for anyone running Windows XP. That does not sound like a joy ride. Luckily for you XNA does not have these problems; there is only one version available, it supports Shader Models 1–3 (DirectX 8 and 9), and you don’t have to worry about anything else. It will run fine on Windows XP, Vista, and even on the Xbox 360.
For more information about the history of DirectX and its relationship to the Windows versions and the available hardware technology, see Figure 6-1. It shows that DirectX versions were usually bound in some way to the newest operating system from Microsoft, and you can expect this to be true in the future too. DirectX 1 and 2 were not really used. DirectX 3 still had support for the retained (high-level) mode, but that mode was not used much by game developers and it was deprecated in the following versions. DirectX 5 came out together with Windows 98 and was more widely used (the DirectX 4 version number was skipped). DirectX advanced by almost a version every year until DirectX 7, which was more widely used as graphics hardware became faster and faster. Together with Windows XP, DirectX 8 was released and introduced shaders for the first time. A little more than a year later DirectX 9 was already available, and there were many versions of it. In the first years (2002, 2003, 2004) Microsoft named each new version with a new letter (DirectX 9.0a, DirectX 9.0b, and so on) and each version allowed the use of a new shader model for the newest graphics hardware; after 9.0c they stopped renaming the version, but many DirectX 9.0c updates (11 by now) were released, often only a few months apart. Direct3D 10 has been in beta since late 2005 and will finally be released with Windows Vista in early 2007. Graphics hardware for Shader Model 4.0 was already available by the end of 2006 (the Nvidia GeForce 8800).
You already saw in the previous chapter that shaders are essential to render anything in XNA; you could not even put a line on the screen without having a shader to output the pixels. Sure, you can use the sprite classes for 2D interface graphics and maybe you will also use the BasicEffect class to display some 3D data, but I don’t like the BasicEffect class and I will not use it in this book at all. I think it is too complicated to use and it really has no benefits when you are able to write your own shaders, which can be much simpler, faster, and more flexible. Using BasicEffect means you have to use the predefined features, and if you need anything more you have to start over and write your own shader anyway. For the upcoming games in this book you will mainly use normal mapping shaders for displaying 3D data and then some cool post-screen shader effects to manipulate the final screen data. Neither of these shaders is possible with just the BasicEffect class.
If you have worked with OpenGL or DirectX before and just used the fixed function pipeline, you might wonder why you should put so much effort into shaders and why you can’t just render some 3D data on the screen and be done with it. Well, it is a little bit harder than that, and though you could just render .x models in DirectX without even knowing much about matrices or how the GPU works internally, you could not really modify much if you never learned about the details. It may be harder to learn all about shaders, but at least you get a much closer look at the graphics hardware and you will be able to understand much better how polygons are rendered on the screen, why certain bugs can occur, and how to quickly fix them.
After the next two chapters this book doesn’t talk about shaders anymore, and you will probably not spend too much time with shaders in your game after you have the important ones done at the beginning. Then you will just use the shaders and you won’t even have to think about them anymore.
One final thing I want to mention is the increased amount of work connected with the use of shaders. Not only do you as a programmer have to learn all about shaders (and you will probably write most of them unless you have a “shader guy” on your team), but your graphic artists (hopefully not you again) have to know about shaders too. For example, using normal mapping shaders to achieve great-looking effects like those in games such as Doom 3 makes sense only if your 3D data consists not only of the basic geometry, but also provides you with the normal maps to achieve those effects.
Not to take anything away from the next chapter, which discusses the normal mapping effect in great detail, but you usually need a high-poly version (with up to several million polygons) of a 3D object and a low-poly version (just a couple of hundred or maybe a few thousand polygons) for rendering. As you can see in Figure 6-2, the low-poly object can look quite bad without normal mapping applied, but once it is activated (see the right side), the sphere suddenly looks much better.
As an alternative to creating high-detail models, which are usually a lot more work, you can also use a paint program to “paint” your own normal map by first drawing a height map and then converting it into a normal map. Usually such maps look much worse and you will never achieve the graphics quality of top games with this approach, but from time to time you can see games implementing normal maps this way to save money. In any case your 3D graphic artist should be aware of all the new techniques, and you should think about the way content is created before you start writing your engine or game code.
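The height-map-to-normal-map conversion mentioned above boils down to taking slopes (finite differences) of the height values. As a hedged sketch, assuming a grayscale height texture and made-up names (heightSampler, texelSize, bumpiness), the math could look like this in HLSL:

```hlsl
// Hypothetical helper: derives a normal from a grayscale height map
// using central differences. All names here are assumptions.
sampler heightSampler;  // height map, grayscale in the red channel
float2 texelSize;       // 1 / texture resolution in x and y
float bumpiness = 2.0f; // scales how strong the bumps appear

float3 HeightToNormal(float2 uv)
{
    // Sample the four neighbor heights around this texel.
    float left  = tex2D(heightSampler, uv - float2(texelSize.x, 0)).r;
    float right = tex2D(heightSampler, uv + float2(texelSize.x, 0)).r;
    float down  = tex2D(heightSampler, uv - float2(0, texelSize.y)).r;
    float up    = tex2D(heightSampler, uv + float2(0, texelSize.y)).r;
    // The height slopes become the x and y components of the normal;
    // z points out of the surface.
    float3 normal = float3((left - right) * bumpiness,
                           (down - up) * bumpiness,
                           1);
    return normalize(normal);
}
```

Paint programs and plug-ins (such as Nvidia’s normal map filter for Photoshop) perform essentially the same computation offline on the image pixels, so your artists don’t have to do it by hand.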
If you want to see shaders in action just play any recent game - they almost all support shaders and even if you don’t see heavy use of normal mapping and post screen effects, shaders might still be used for rendering the 3D models efficiently or executing shadow mapping techniques.
3D shooter games like Doom 3, Half-Life 2, and Far Cry, and more recent games like F.E.A.R., Ghost Recon, and Gears of War show really top-notch graphics and make great use of shader effects. Shooters benefit greatly from most new graphics technologies because you see your surrounding world at so many detail levels. First you see yourself, or at least your weapons; then you might be indoors and see walls and surrounding objects at a near distance; but there are also big rooms and even the outside world where you can see to great distances. This means levels are usually big, and because you can move wherever you want, you are still able to look closely at anything. Obviously graphics cards, especially earlier ones, are not able to store that much detail in video memory, and just mapping one big texture over the whole 3D level or using the same wall texture all over the place is not really a good option. Instead the game developers have to think of many cool techniques to improve the visual quality: using detail maps, using normal mapping to make the player think geometry is much more detailed, and mixing in post-screen effects to make the overall game appear more realistic.
Other games like strategy games or role-playing games also benefit from shaders, but it took longer before those genres even implemented shaders, and these games focus more on the gameplay, on how to render many units at once, and on problems with zooming in and out.
Anyway, you probably want cool shader effects in your game too, and if you look at a screenshot of Rocket Commander (see Figure 6-3), you can see some normal mapping and post-screen shaders at work. Without shaders enabled (left screen) the game looks really boring and the asteroids don’t look very real because the lighting is just not right. Please note that the non-shader version was just provided as an option to support older graphics hardware; the game was designed with shaders in mind. It would be possible to create a better screenshot for the fixed function version, but in real time the game would look even more unrealistic because the lighting does not change and there are no post-screen effects.
For a good comparison between shaders and the fixed function pipeline in games, take a look at water effects in 3D games. Water surfaces have evolved so much in the past few years that it is hard to keep up.
If you want to see more examples in games or would like a longer introduction, I recommend that you watch my Rocket Commander Video Tutorials, which you can find on www.RocketCommander.com. Or you can watch my XNA Racing Game Video Tutorials on www.XnaRacingGame.com. There are also more links to useful shader resource sites, more screenshots, and so on.