Upgrading cScene for Ray Tracing


You're going to add ray tracing and photon mapping to the cScene class, so you must be able to tell the two apart in code. You'll do this by defining two constants and assigning one of them to a member variable called Render_Algorithm . For now, this variable is set to the ray-tracing flag. You'll then add a few new methods that implement the ray-tracing algorithm in full detail. The rendering settings are read in when you load the scene definition file; keep this in mind as you branch your ray-tracing code for specific lighting effects. The new code is shown in bold:

#ifndef _C_SCENE_H
#define _C_SCENE_H

// 50 objects at a time to re-allocate
#define MEMORY_OBJECTS_CHUNK   50
// Five lights at a time to re-allocate
#define MEMORY_LIGHT_CHUNK      5
#define RENDER_WIDTH          800
#define RENDER_HEIGHT         600
#define BUFFER_LENGTH          80

// New Code
#define ZERO_TRACE_DEPTH        0
#define MAX_TRACE_DEPTH         8
#define RAY_TRACING             1
#define PHOTON_MAPPING          2

class cScene
{
public:
   cScene();
   ~cScene();

   // scene load and destroy
   void   Init_Scene(); // will load scene
   void   Kill_Scene();

   // list methods
   void   AddToObjectList(cObject ** pList, long *lCount, cObject tObject);
   void   AddToLightList(cLight ** pList, int *nCount, cLight tObject);

   // color methods
   void   PlotPixel(int x, int y, color3 color);
   void   FillColor(color3 color);

   // camera methods
   void   Setup_Camera_Coords( cVector3 * X_Axis,
             cVector3 * Y_Axis, cVector3 * Z_Axis );

   // read methods
   bool   Read_In_Scene(char * filename);

   // scene primitives
   bool       Read_Camera(ifstream& is);
   bool       Read_Light(ifstream& is);
   cMaterial  Read_Material(ifstream& is);
   bool       Read_Sphere(ifstream& is);
   bool       Read_Triangle(ifstream& is);

   // attribute primitives
   cVector3   Read_Vector(ifstream& is);
   float      Read_Scalar(ifstream& is);
   color3     Read_Color (ifstream& is);
   void       Read_Comma (ifstream& is);

   // render settings
   class Render_Settings
   {
   public:
      Render_Settings()
      {
         use_diffuse_shading      = false;
         use_diffuse_reflection   = false;
         use_specular_reflection  = false;
         use_refraction           = false;
         use_specular_hightlights = false;
      }
   public:
      bool  use_diffuse_shading,
            use_diffuse_reflection,
            use_specular_reflection,
            use_refraction,
            use_specular_hightlights;
   };

   // lighting parameters from file
   bool   Read_Render_Settings(ifstream& is);

   // New code for Ray Tracing
   int    Find_Closest_Object(cRay ray, cVector3 * cHitPoint);
   void   BeginTracing();
   bool   Tray_Ray_In_Scene(cRay ray, int depth, color3 *color);
   color3 ShadePoint(cVector3 hit_point, cVector3 normal, int iObject);

public:
   cObject * pObjectList;
   long      nObjectCount;
   cLight  * pLightList;
   int       nLightCount;
   color3    ColorBuffer[ RENDER_WIDTH * RENDER_HEIGHT ];
   cVector3  CameraPosition,
             Look,
             Up,
             Viewing_Direction;
   float     Viewing_Distance;
   bool      isDataReady;
   long      triangle_count,
             sphere_count;
   Render_Settings RenderSetting;

   // New code for chapter 9
   int Render_Algorithm;
};
#endif

Initializing the Scene

You need to prompt the user to manually load the scene from file. The application asks the user for the path and filename of the scene definition file. The path can be relative ( MyFolder\File.txt ) or absolute ( X:\File.txt ). After the user inputs the path and filename, the program loads each element of the scene definition file into the scene structures. All the lights and objects are loaded into their appropriate arrays. The rendering settings are defined and the camera is set up; the viewing direction and position are defined. The application is then notified that it will be using the ray-tracing algorithm; this choice is stored in the Render_Algorithm variable. The scene is now ready for the rendering algorithm.

void cScene::Init_Scene()
{
   char buffer[256];
   cout << "Please Type in Path and Filename\n";
   cout << "Usage (ex. C:\\sample.txt)" << endl;
   cout << "Enter Information: ";
   cin  >> buffer;
   Read_In_Scene(buffer);
   // Ray Tracing and Local Reflection
   Render_Algorithm = RAY_TRACING;
}

Finding the Closest Intersection

The first thing you need to do is develop a method that tests for ray-to-object intersections. This method takes a ray structure and a vector pointer as its two parameters. The ray is the input; the outputs are the returned integer (the index of the object that was hit) and the vector pointer (the point of intersection). The method's sole purpose is to search through each object in the scene and determine whether the incoming ray intersects any of them. It also determines which object the ray intersects first, updates the vector pointer to that closest point of intersection, and returns the object's index.

int cScene::Find_Closest_Object(cRay Ray, cVector3 * hit_point)
{
   int i, hit_object, TempHit;
   float Last_Shortest_Distance;

   Last_Shortest_Distance = 9999999.0f; // make very big
   hit_object = -1; // this is an invalid number

   // for n amount of objects we have
   for(i = 0; i < this->nObjectCount; i++)
   {
      if( pObjectList[ i ].Default_Primitive != OBJECT_NOTHING )
      {
         // Get temp hit
         TempHit = pObjectList[ i ].Find_Hitpoint( &Ray );
         // If temp hit is valid and this ray distance is less
         // than the previous distance, let's save it.
         if( TempHit && ( Ray.distance < Last_Shortest_Distance ) )
         {
            // save
            Last_Shortest_Distance = Ray.distance;
            *hit_point             = Ray.destination;
            hit_object             = i;
         }
      }
   }
   return hit_object;
}

Beginning the Tracing Process

All the rays traced from the image plane into the scene are handled in one major function called BeginTracing() . You must trace a ray into the scene through each pixel on the view plane. See Figure 9.6.

Figure 9.6. The Begin Tracing method traces rays from the image plane into the scene.

graphic/09fig06.gif


This requires a ray structure that holds the origin, direction, and destination of the ray. You must also create a color3 structure, which will hold the returned illumination information after the ray has intersected a point on a surface and been shaded accordingly. In order to guide each ray in the correct direction for each pixel on the image plane, you need to set up the local X, Y, and Z axes of the camera. Again, the camera is always relative to the world axes and must be kept orthogonal. You not only need to set up the local coordinate system for the camera, but must also guide each ray for each pixel in accordance with the camera position and the distance to the view plane. As all this happens, a progress report printed to the console reports how many columns of the image have been rendered, keeping the user informed of what's happening in the application.

In order to accommodate the infinite orientations and positions of the camera, you must calculate the camera coordinate system. Each ray origin is set to the sum of the Z axis vector, camera position, and distance to the view plane (viewing distance). This creates the field of view. Each pixel position on the image plane is set as the direction the ray is going to take. In order to guide the ray throughout the world and guarantee it travels orthogonally, the ray's direction is multiplied by the X, Y, and Z camera vectors.

If Tray_Ray_In_Scene() returns valid information, the ray has been traced successfully: it has intersected a point on a surface and has been shaded accordingly. Once the point has been shaded, you must saturate the returned color so each red, green, and blue component falls in the unit range of 0.0 to 1.0. Finally, the application asks for the path and filename you want to create on your local file system. After you enter this information, the application plots each pixel into the color buffer and then saves the whole RGB color buffer (the image) as a *.PPM formatted file at the location you selected.

The logic behind BeginTracing() is that, for each pixel on the image plane, the program picks a ray from the eye through this pixel where the pixel color is assigned from the information that is called by Tray_Ray_In_Scene() . The number of rendered columns is then printed to the console screen. Here's the code:

void cScene::BeginTracing()
{
   // Error Checking
   if ( this->nLightCount == 0 && this->nObjectCount == 0 )
   {
      printf("Error, no objects and lights loaded!\n");
      return;
   }

   cRay     Ray;
   color3   color = color3::Black;
   cVector3 X_Axis, Y_Axis, Z_Axis;
   cVector3 OffsetX, OffsetY;
   int      Half_Width, Half_Height, X, Y;
   int      w, h;

   // get half of width and height
   Half_Width  = RENDER_WIDTH  / 2;
   Half_Height = RENDER_HEIGHT / 2;

   // setup ray based on camera vectors
   Setup_Camera_Coords( &X_Axis, &Y_Axis, &Z_Axis );

   // ray must begin some distance into the scene where
   // the image plane is located
   Ray.origin = CameraPosition + ( Z_Axis * Viewing_Distance );

   // for each pixel on the image plane, spawn a ray
   for(w = 0; w < RENDER_WIDTH; w++)
   {
      // report progress to console screen
      cout << "Columns:" << w << " of " << RENDER_WIDTH-1 << endl;
      for(h = 0; h < RENDER_HEIGHT; h++)
      {
         X = w - Half_Width;
         Y = h - Half_Height;
         // calculate offset for the ray on the X axis
         OffsetX = X_Axis * (float) X;
         // calculate offset for the ray on the Y axis
         OffsetY = Y_Axis * (float) Y;
         // calculate ray direction based on the
         // computed x, y and z
         Ray.direction = (( OffsetX + OffsetY ) + this->CameraPosition );
         // now position it to where the image plane is located
         Ray.direction = Ray.direction - Ray.origin;
         // normalize the ray (unit vector)
         Ray.direction.Normalize();
         // trace the ray into the scene
         if( Tray_Ray_In_Scene( Ray, ZERO_TRACE_DEPTH, &color ) )
         {
            // saturate return color if it falls outside
            // the 0.0 to 1.0 unit range
            color.Sat();
            // save color to image buffer
            PlotPixel(w, h, color);
         }
      }
   }

   // write ppm
   char buffer[255];
   cout << "Please Enter output Path and Filename:" << endl;
   cout << "Usage (c:\\example.ppm): ";
   cin >> buffer;
   WritePPM(buffer, ColorBuffer, RENDER_WIDTH, RENDER_HEIGHT );
}

Tracing the Ray Through the Scene

The second part of the process involves actually tracing the ray through the scene. See Figure 9.7. The journey the ray takes must be calculated and recorded. After the ray leaves the image plane, the first thing you must do is find the closest intersection by calling the Find_Closest_Object() method. The method returns the index of the closest object found and writes the closest intersection point through a pointer.

Figure 9.7. The movements of rays leaving the image plane that intersect objects in the scene.

graphic/09fig07.gif


The next step of the algorithm is to shade the intersection point. A color structure is declared to store and return the final pixel intensity. The shading itself is done in a separate method called ShadePoint() . This part of the application is the brain of the operation, because all of the lighting logic as well as the reflection and refraction capabilities are performed here. The object's color is needed to shade a point and is applied to the color structure. The ambient coefficient is also summed into the solution. The diffuse interaction and specular highlight contributions are returned from ShadePoint() and summed into the color structure. The RGB components are then saturated so that each component falls within the unit range of 0.0 to 1.0. After all these coefficients are applied to the color structure, the result is returned.

The logic behind Tray_Ray_In_Scene() is that it finds the closest intersection in the scene, computes the intersection point and normal, and calls ShadePoint() to shade the desired point for the different natural lighting effects.

bool cScene::Tray_Ray_In_Scene(cRay ray, int depth, color3 *pColor)
{
   cVector3 Hit, Normal;
   int      iClosestObject;
   color3   color_Object    = color3::Black,
            intensity_Shade = color3::Black;
   color3   color           = color3::Black;

   // find closest object
   iClosestObject = Find_Closest_Object(ray, &Hit);

   // if object is in the local array list
   if( iClosestObject > -1 )
   {
      // get object normal
      Normal = pObjectList[iClosestObject].Normal(Hit);
      // object color
      color_Object = pObjectList[iClosestObject].Color;

      // BEGIN RAY TRACING
      if ( this->Render_Algorithm & RAY_TRACING )
      {
         // Ray Tracing uses an Ambient shading
         // to ensure objects are lit
         color = color_Object * pObjectList[iClosestObject].fAmbientFactor;
         // diffuse shading & specular highlights
         intensity_Shade = ShadePoint( Hit, Normal, iClosestObject );
         // combine our information
         color.r += color_Object.r * intensity_Shade.r;
         color.g += color_Object.g * intensity_Shade.g;
         color.b += color_Object.b * intensity_Shade.b;
      }

      // Saturate colors out of range
      color.Sat();
      // Save Color
      *pColor = color;
   }
   else
   {
      *pColor = color3::Black;
      return false;
   }
   return true;
}

Shading the Point of Intersection

This final part of the process shades a point in 3D space and projects it onto the view plane. The ShadePoint() method calculates the radiance of a point. The function cycles through each light source and applies the diffuse interactions and specular highlights accordingly. The rays that leave the image plane and hit objects are called primary rays . As these rays intersect points on surfaces, they are shaded using the shade point method. A point shaded as a result of rays shot directly from the view plane, and not from any other surface, receives what is called direct illumination or local illumination . When rays reach objects after bouncing off other surfaces rather than coming straight from the eye, this is called secondary illumination .

This terminology is relative to the type of ray tracing you use. For example, in forward ray tracing, direct illumination refers to rays shot directly from the light source instead of from the eye, as in backward ray tracing. The shade point method calculates the shading for the diffuse surface as well as applying the specular highlights. It also includes each light source's contribution (wattage and color) to the shaded point. It is important to point out that, as of now, light travels directly from the eye and is shaded relative to the light source; it doesn't get reflected or transmitted to other surfaces. For this reason, you don't yet need a recursive algorithm; reflection and refraction require recursion because secondary rays are launched at the point of intersection. After the color structure has been shaded with the different effects, it is returned to the caller with the new final shade. The shade method includes the specular highlights and the diffuse interaction contribution for the point.

The logic behind ShadePoint() is that it sets the initial color to black. It then cycles through each light source in the scene and applies the color of the object and the different natural light coefficients for diffuse shading, specular highlights, and the light source color and wattage for the intersection point.

color3 cScene::ShadePoint(cVector3 hit_point, cVector3 normal, int iObject)
{
   cVector3 From_Light_To_Hitpoint, Shininess;
   int i;
   float fLen, fAngle;

   // color is black by default
   color3 color;
   color = color3::Black;

   // for each light
   for( i = 0; i < this->nLightCount; i++ )
   {
      // is light okay to use?
      if( pLightList[i].bActive )
      {
         // from hit point toward the light
         From_Light_To_Hitpoint = pLightList[ i ].vPosition - hit_point;
         fLen = From_Light_To_Hitpoint.Mag();
         From_Light_To_Hitpoint.Normalize();

         // find angle between surface normal and light direction
         fAngle = ( From_Light_To_Hitpoint * normal );

         // if visible from lightsource to surface
         if( fAngle > 0.0f )
         {
            // do diffuse interaction on surface
            if( RenderSetting.use_diffuse_shading
                && pObjectList[iObject].fDiffuseFactor > 0.0f )
            {
               color.r += ( fAngle * pLightList[ i ].Color.r ) *
                  pLightList[ i ].fWattage *
                  pObjectList[iObject].fDiffuseFactor;
               color.g += ( fAngle * pLightList[ i ].Color.g ) *
                  pLightList[ i ].fWattage *
                  pObjectList[iObject].fDiffuseFactor;
               color.b += ( fAngle * pLightList[ i ].Color.b ) *
                  pLightList[ i ].fWattage *
                  pObjectList[iObject].fDiffuseFactor;
            }

            // do specular highlight
            if( RenderSetting.use_specular_hightlights
                && pObjectList[iObject].fShininess > 0.0f )
            {
               // reflection vector: R = 2(N.L)N - L
               Shininess = ( normal * ( 2.0f * fAngle )) - From_Light_To_Hitpoint;
               float Dot = ( Shininess * this->Viewing_Direction );
               if( Dot > 0.0f )
                  color += pLightList[i].Color *
                     ( (float)pow( Dot, pObjectList[iObject].fSpecularExponent )
                       * pObjectList[iObject].fShininess );
            }
         } // end visible
      } // end light on
   } // end number of lights
   return color;
}



Focus On Photon Mapping (Premier Press Game Development)
ISBN: 1592000088
Year: 2005
Pages: 128
Authors: Marlon John