Rendering a Single Triangle Without Using the Fixed-function Pipeline

What exactly is the "fixed-function pipeline" that has been mentioned several times so far in this chapter? The fixed-function pipeline controls exactly how vertices are rendered: how they are transformed, how they are lit, everything. When you set the vertex format property on the device, you are telling the device that you want these vertices rendered in a certain way, based on what that vertex format is.
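For example, a fixed-function render might look something like the following sketch (the vertex format and transform used here are placeholders, not code from this chapter):

 // Fixed-function sketch: describe the vertex layout with a format flag
 // and let the runtime and driver do the transform and lighting math.
 device.VertexFormat = CustomVertex.PositionOnly.Format;
 device.Transform.World = Matrix.RotationZ(angle);
 device.SetStreamSource(0, vb, 0);
 device.DrawPrimitives(PrimitiveType.TriangleList, 0, 1);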

One of the major drawbacks of this design is that for every single feature a graphics card needs to expose, a corresponding fixed-function API must be designed and implemented. With the power of graphics hardware increasing quite rapidly (more quickly than CPUs, for example), the number of these APIs would quickly get out of hand. Even if such a large collection of hard-to-understand APIs were acceptable, there is still the underlying issue of the developer not really knowing what's going on under the covers. If there's one thing that most developers (particularly game developers) want, it's complete control.

One of the first applications this book discussed was a single triangle spinning around the screen. This was quite simple to do using the fixed-function pipeline, but now you will see what it would take to use the programmable pipeline instead. Go ahead and create a new Windows Forms application and get it ready for this graphics program by declaring the device variable, as well as setting the window style. You will then need the following variables added:

 private VertexBuffer vb = null;
 private Effect effect = null;
 private VertexDeclaration decl = null;

 // Our matrices
 private Matrix worldMatrix;
 private Matrix viewMatrix;
 private Matrix projMatrix;

 private float angle = 0.0f;

Naturally, you will store the vertex data in the vertex buffer. Since you will not be using the transforms from the device, you will also need to store each of these transformation matrices to use in the programmable pipeline. The second and third variables here are completely new, though. The Effect object is the main object you will use when dealing with HLSL. The vertex declaration class is similar in use to the vertex format enumeration in the fixed-function pipeline. It informs the Direct3D runtime about the size and types of data it will be reading from the vertex buffer.

Since this application will be using a relatively new feature of graphics cards (namely the programmable pipeline), it's entirely possible that your graphics card will not support it. If that is the case, you will want to switch to the reference device that ships with the DirectX SDK. The reference device implements the entire API in software, albeit very slowly. You will need a more robust initialization routine for this, so use the one found in Listing 11.1.

Listing 11.1 Initializing Graphics with a Fallback
 public bool InitializeGraphics()
 {
     // Set our presentation parameters
     PresentParameters presentParams = new PresentParameters();

     presentParams.Windowed = true;
     presentParams.SwapEffect = SwapEffect.Discard;
     presentParams.AutoDepthStencilFormat = DepthFormat.D16;
     presentParams.EnableAutoDepthStencil = true;

     bool canDoShaders = true;
     // Does a hardware device support shaders?
     Caps hardware = Manager.GetDeviceCaps(0, DeviceType.Hardware);
     if (hardware.VertexShaderVersion >= new Version(1, 1))
     {
         // Default to software processing
         CreateFlags flags = CreateFlags.SoftwareVertexProcessing;
         // Use hardware if it's available
         if (hardware.DeviceCaps.SupportsHardwareTransformAndLight)
             flags = CreateFlags.HardwareVertexProcessing;
         // Use pure if it's available
         if (hardware.DeviceCaps.SupportsPureDevice)
             flags |= CreateFlags.PureDevice;
         // Yes, create our device
         device = new Device(0, DeviceType.Hardware, this, flags, presentParams);
     }
     else
     {
         // No shader support
         canDoShaders = false;
         // Create a reference device
         device = new Device(0, DeviceType.Reference, this,
             CreateFlags.SoftwareVertexProcessing, presentParams);
     }

     // Create our vertex data
     vb = new VertexBuffer(typeof(CustomVertex.PositionOnly), 3, device,
         Usage.Dynamic | Usage.WriteOnly, CustomVertex.PositionOnly.Format,
         Pool.Default);
     vb.Created += new EventHandler(this.OnVertexBufferCreate);
     OnVertexBufferCreate(vb, null);

     // Store our projection and view matrices
     // (cast to float so the aspect ratio isn't computed with integer division)
     projMatrix = Matrix.PerspectiveFovLH((float)Math.PI / 4,
         (float)this.Width / (float)this.Height, 1.0f, 100.0f);
     viewMatrix = Matrix.LookAtLH(new Vector3(0, 0, 5.0f), new Vector3(),
         new Vector3(0, 1, 0));

     // Create our vertex declaration
     VertexElement[] elements = new VertexElement[]
     {
         new VertexElement(0, 0, DeclarationType.Float3,
             DeclarationMethod.Default,
             DeclarationUsage.Position, 0),
         VertexElement.VertexDeclarationEnd
     };
     decl = new VertexDeclaration(device, elements);

     return canDoShaders;
 }

The code here assumes the default adapter (ordinal zero) is the adapter you will be using to render the scene. If this isn't the case, feel free to modify this code to suit your graphics card.

For the sake of clarity, this code omits a more robust enumeration routine and simply chooses the default adapter. Before you can actually create the device, though, you will want to check the capabilities of the device that would be created, so you first retrieve the Caps structure.

In this application, you will only be using the programmable pipeline to process vertices, so you will need to ensure that the card supports at the very least first-generation vertex shaders. As new versions of the API are released, vertex and pixel shaders can each gain one or more generations in the form of a new "version" of the shader language. For example, the DirectX 9 release allows you to use vertex and pixel shaders up to version 3.0 (although at the time of writing there are no cards that support these versions). The first-generation shader version was naturally 1.0; however, in DirectX 9 this version is obsolete and has been replaced with version 1.1, which is what you test against here.
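The Caps structure exposes the pixel shader version in the same way. This application doesn't use pixel shaders, but if it did, a similar check would apply (a sketch, not part of Listing 11.1):

 // Sketch: pixel shader support is checked just like vertex shader support
 Caps hardware = Manager.GetDeviceCaps(0, DeviceType.Hardware);
 if (hardware.PixelShaderVersion >= new Version(1, 1))
 {
     // The adapter can run first-generation pixel shaders
 }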

Assuming the card does have this capability, you then want to create the "best" type of device possible. You default to software vertex processing, but if the device can support hardware processing and a pure device, you should use those instead. If the card does not have the capability to use shaders, you will instead use the reference device, which supports all features of the runtime, just very slowly.

Next you will need to create the vertex buffer, which should be quite familiar by now. You will want to render a single triangle, and you will only include the position of the vertices. The event handler method is as follows:

 private void OnVertexBufferCreate(object sender, EventArgs e)
 {
     VertexBuffer buffer = (VertexBuffer)sender;

     CustomVertex.PositionOnly[] verts = new CustomVertex.PositionOnly[3];
     verts[0].SetPosition(new Vector3(0.0f, 1.0f, 1.0f));
     verts[1].SetPosition(new Vector3(-1.0f, -1.0f, 1.0f));
     verts[2].SetPosition(new Vector3(1.0f, -1.0f, 1.0f));

     buffer.SetData(verts, 0, LockFlags.None);
 }

Once the vertex buffer has been created, you will need to store the view and projection transformations for later use. You create these transformations in exactly the same way as you did before; the only difference is that now you won't be using the device's transform property to store this data.

Finally, you've come to the vertex declaration. The vertex declaration will tell Direct3D all of the information it needs to know about the vertices that will be passed on to the programmable pipeline. When creating the vertex declaration object, you pass in the device that will be used, as well as an array of vertex elements, each of which will describe one component of the vertex data. Take a look at the constructor for a vertex element:

 public VertexElement(System.Int16 stream, System.Int16 offset,
     Microsoft.DirectX.Direct3D.DeclarationType declType,
     Microsoft.DirectX.Direct3D.DeclarationMethod declMethod,
     Microsoft.DirectX.Direct3D.DeclarationUsage declUsage,
     System.Byte usageIndex)

The first parameter is the stream where the vertex data will be used. When calling SetStreamSource on the device, the first parameter is the stream that the vertex buffer passed in will be associated with. Thus far, you have stored all of the data in one vertex buffer on one stream; however, it is entirely possible to render a single object with data from multiple different vertex buffers assigned to different streams. Since you will only have one vertex buffer at stream zero, you will use that value for this parameter.
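For example, if positions and normals lived in two separate vertex buffers, each buffer would be bound to its own stream, and each vertex element's first parameter would name the stream it reads from. A hypothetical sketch (positionBuffer and normalBuffer are assumed, not variables from this chapter):

 // Hypothetical two-stream setup
 device.SetStreamSource(0, positionBuffer, 0); // positions on stream 0
 device.SetStreamSource(1, normalBuffer, 0);   // normals on stream 1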

The second parameter is the offset into buffer where this data is stored. In this case, there is only one data type that the vertex buffer contains, so this will naturally be zero. However, if there were multiple components, each one would need to be offset accordingly. For example, if the first component was position (three float values), with the second component the normal (also three float values), the first component would have an offset of 0 (since it's the first component in the buffer), while the normal component would have an offset of 12 (four bytes for each of the three floats).
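As a sketch, the position-plus-normal layout just described would be declared like this (the same API used in Listing 11.1; only the normal element is new):

 VertexElement[] posNormElements = new VertexElement[]
 {
     // Position: three floats starting at byte offset 0
     new VertexElement(0, 0, DeclarationType.Float3,
         DeclarationMethod.Default, DeclarationUsage.Position, 0),
     // Normal: three floats starting at byte offset 12 (3 floats x 4 bytes)
     new VertexElement(0, 12, DeclarationType.Float3,
         DeclarationMethod.Default, DeclarationUsage.Normal, 0),
     VertexElement.VertexDeclarationEnd
 };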

The third parameter is used to inform Direct3D of the type of data that will be used here. Since you are simply using the position, you can use the Float3 type here (this type will be discussed later in the chapter).

The fourth parameter describes the method in which this declaration will be used. In the majority of cases (unless you are using higher-order primitives), you will use Default, as was done here.

The fifth parameter describes the usage of the component, such as position, normal, color, and so on. Here, the three floats will be used to describe a position. The final parameter modifies the usage data, allowing you to specify multiple components that share the same usage type. In most cases, you will use zero here as well.
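For instance, a vertex with two sets of 2D texture coordinates would repeat the same usage with different usage indices. A hypothetical fragment (these elements would sit inside a declaration array like the ones shown earlier, after a 12-byte position component):

 // First set of 2D texture coordinates: usage index 0
 new VertexElement(0, 12, DeclarationType.Float2,
     DeclarationMethod.Default, DeclarationUsage.TextureCoordinate, 0),
 // Second set of 2D texture coordinates: usage index 1
 new VertexElement(0, 20, DeclarationType.Float2,
     DeclarationMethod.Default, DeclarationUsage.TextureCoordinate, 1),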

It's important to note that your vertex element array must have VertexElement.VertexDeclarationEnd as its last member. So, for this simple vertex declaration, you use stream zero with three floats that represent the position of each vertex. With the vertex element array created, you can create the vertex declaration object. Finally, you return a Boolean value: true if a hardware device that can use shaders was created, false if this is a reference device.

With the initialization method out of the way, you will need to call it and handle the return value. Update the main method as follows:

 static void Main()
 {
     using (Form1 frm = new Form1())
     {
         // Show our form and initialize our graphics engine
         frm.Show();
         if (!frm.InitializeGraphics())
         {
             MessageBox.Show("Your card does not support shaders.  " +
                 "This application will run in ref mode instead.");
         }
         Application.Run(frm);
     }
 }

So what's really left? Rendering the scene, and that's about it, right? You will use the same rendering method used throughout the book, so override the OnPaint method as follows:

 protected override void OnPaint(System.Windows.Forms.PaintEventArgs e)
 {
     device.Clear(ClearFlags.Target | ClearFlags.ZBuffer,
         Color.CornflowerBlue, 1.0f, 0);

     UpdateWorld();

     device.BeginScene();
     device.SetStreamSource(0, vb, 0);
     device.VertexDeclaration = decl;
     device.DrawPrimitives(PrimitiveType.TriangleList, 0, 1);
     device.EndScene();

     device.Present();
     this.Invalidate();
 }

After clearing the device, you will need to call the update world method, which does nothing more than increment the angle variable and modify the stored world matrix:

 private void UpdateWorld()
 {
     worldMatrix = Matrix.RotationAxis(
         new Vector3(angle / ((float)Math.PI * 2.0f),
             angle / ((float)Math.PI * 4.0f),
             angle / ((float)Math.PI * 6.0f)),
         angle / (float)Math.PI);

     angle += 0.1f;
 }

With that done, the rest of the code looks familiar, with the exception being that you set the vertex declaration property rather than the vertex format property. Running this application as it is, though, will get you nothing more than a pretty cornflower blue screen. Why, you may ask? The Direct3D runtime simply has no idea what you want it to do. You need to actually write a "program" for the programmable pipeline.

Add a new blank text file to your project called simple.fx. You will use this file to store the HLSL program. Once this file has been created, add the following HLSL code to the file:

 // Shader output: position and diffuse color
 struct VS_OUTPUT
 {
     float4 pos  : POSITION;
     float4 diff : COLOR0;
 };

 // The combined world, view, and projection matrix
 float4x4 WorldViewProj : WORLDVIEWPROJECTION;
 float Time = 1.0f;

 // Transform our coordinates into projection space
 VS_OUTPUT Transform(
     float4 Pos : POSITION)
 {
     // Declare our return variable
     VS_OUTPUT Out = (VS_OUTPUT)0;

     // Transform our position
     Out.pos = mul(Pos, WorldViewProj);

     // Set our color (diff.b is a single component, so a single
     // matrix element is used rather than a two-component swizzle)
     Out.diff.r = 1 - Time;
     Out.diff.b = Time * WorldViewProj[2].y;
     Out.diff.ga = Time * WorldViewProj[0].xy;

     // Return
     return Out;
 }

As you can see, HLSL is remarkably similar to the C (and C#) language. Now examine what this small section of code is actually doing. First, you declare a structure that will hold the output data of the vertex program. The declarations of the variables are slightly different from normal, however: a semantic has been added to each one. Semantics are tags that indicate the usage class of the variable, in this case the position and the first color.

You may wonder why both the position and color are in the output structure when the vertex buffer only contains position data. Since the output structure contains both position and color data (along with the corresponding semantics), Direct3D will know to render with the new data returned from the vertex program, instead of using the data stored in the vertex buffer.

Next you have the two "global" variables: the combination of the world, view, and projection matrices, along with a time variable used to animate the color of the triangle. You will need to update each of these parameters every frame while the application is running.
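A rough sketch of that per-frame update, done through the Effect object once it has been created from this file (the value used to animate Time is an assumption; only the parameter names come from the shader above):

 // Sketch: push the per-frame values into the shader's global variables
 effect.SetValue("WorldViewProj", worldMatrix * viewMatrix * projMatrix);
 effect.SetValue("Time", (float)Math.Abs(Math.Sin(angle))); // assumed animation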

Finally, you have the actual vertex program method. As you can see, it returns the structure you've already created and takes the position of a vertex as its argument. This method will run for every vertex in the vertex buffer (all three of them, in this case). Notice that the input variable also has a semantic to let Direct3D know the type of data it's dealing with.

Once inside the method, the code is quite simple. You declare the variable that you will be returning, Out. You transform the position by multiplying the input position by the stored transformation matrix. You use the mul intrinsic function here because the matrix is a float4x4 while the position is a float4; the standard multiply operator would fail with a type mismatch.

After that, you use a simple formula to set each component of the color to a constantly changing value. Notice how you are setting different components of the color in each line, with the red being set first, then the blue, then the green and alpha. With the position and color values having been set, you can return the now filled structure.

SHOP TALK: DECLARING VARIABLES IN HLSL AND INTRINSIC TYPES

You will notice that you are using some intrinsic types that don't exist in the C or C# languages, namely float4 and float4x4. The scalar types that HLSL supports are

  • bool: true or false
  • int: a 32-bit signed integer
  • half: a 16-bit floating-point value
  • float: a 32-bit floating-point value
  • double: a 64-bit floating-point value

Variable declarations of these types will behave just as they would in the C# language: a single variable of that type. However, adding a single integer value to the end of one of these scalar types will declare a "vector" type. These vector types can be used like a vector, or like an array. For example, look at a declaration similar to one used in the code:

 float4 pos; 

This declares a Vector4 to hold a position. If you wanted a 2D vector instead, you could declare this as

 float2 someVector; 

You can access the members of this variable like an array, such as

 pos[0] = 2.0f; 

You can also access this type much like a vector in your C# code, such as

 pos.x = 2.0f; 

You can access one or more of the vector components in this way. This is called "swizzling." You may use the vector components of xyzw, as well as the color components of rgba; however, you may not mix and match these in the same swizzle. For example, these lines are valid (even if they don't do anything):

 pos.xz = 0.0f;
 pos.rg += pos.xz;

However, this line is not:

 pos.xg = 0.0f; 

The xg swizzle isn't valid since it's mixing both the vector and color components.
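Matrix types follow the same pattern: float4x4 declares a 4x4 matrix of floats, and indexing it with [n] returns a row as a float4, which can itself be swizzled, just as the vertex program above does with WorldViewProj. A small sketch:

 float4x4 someMatrix;
 float4 row  = someMatrix[0];     // the first row, as a float4
 float2 pair = someMatrix[2].yz;  // swizzle components of the third row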

Variables can also have modifiers much like they do in C and C#. You can declare constants much like you would normally:

 const float someConstant = 3.0f; 

You can share variables among different programs:

 shared float someVariable = 1.0f; 

See the DirectX SDK documentation for more information on the HLSL if needed.


