The Concept Behind Photon Mapping

The model for photon mapping builds on the basic form of light transport used in the fundamental concepts of ray tracing. This keeps the distribution of light simple because it follows the basic way light actually behaves in nature. Because the application you are developing isn't real-time, you want to render the scene only after all the light has propagated through it: all the dark areas have been lit and all the reflections and refractions have already occurred.

The observer and the light sources are the two major elements you must take into account when rendering. If you were to render the scene from the observer's point of view alone, you would run into the same problems seen in backward ray tracing. This is why bi-directional ray tracing was developed, which means you render the scene in two passes. You need to take both the observer and the light sources into consideration when rendering because each reveals very important information about the flow of light. The scene is rendered through the eyes of the observer, and the lighting comes from the light sources. You must use both of these aspects to calculate the lighting correctly: the light must first be scattered (to find the irradiance) and then observed (to find the radiance).

Here is the fundamental process when rendering (a minimal code sketch follows the list):

  1. As light is emitted from a light source, it is scattered by nearby objects. The light is then either absorbed by the object or reflected/transmitted to another object.

  2. Sample the scene from the observer's point of view after all the light has propagated into the scene. Then store the visual image of what the observer perceives on the image plane.
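
To make these two steps concrete, here is a minimal C++ sketch, not the book's code, that scatters photons from a single point light onto a simple floor plane and then gathers the stored energy around a point the observer sees. All of the names in it (Photon, photonMap, emitPhotons, gather) are illustrative stand-ins, and the scene is reduced to one plane so the example stays self-contained.

#include <cstdio>
#include <cmath>
#include <random>
#include <vector>

struct Vec3   { float x, y, z; };
struct Photon { Vec3 position; Vec3 incoming; float power; };

std::vector<Photon> photonMap;   // the lighting, stored apart from the geometry

// Pass 1: emit photons from a point light and scatter them into the scene.
// Here the "scene" is just the floor plane y = 0.
void emitPhotons(Vec3 light, float totalPower, int count)
{
    std::mt19937 rng(7);
    std::uniform_real_distribution<float> u(-1.0f, 1.0f);

    for (int i = 0; i < count; ++i) {
        // Shoot each photon in a random, downward-pointing direction.
        Vec3 dir { u(rng), -std::fabs(u(rng)) - 0.1f, u(rng) };

        // Intersect the ray with the floor and store the hit in the map.
        float t = -light.y / dir.y;
        Vec3 hit { light.x + t * dir.x, 0.0f, light.z + t * dir.z };
        photonMap.push_back({ hit, dir, totalPower / count });
        // A full tracer would now decide whether the photon is absorbed
        // or reflected onward to another surface.
    }
}

// Pass 2: sample the scene from the observer's point of view. For a point
// the observer sees, gather the energy of the photons that landed nearby.
float gather(Vec3 p, float radius)
{
    float energy = 0.0f;
    for (const Photon& ph : photonMap) {
        float dx = ph.position.x - p.x;
        float dz = ph.position.z - p.z;
        if (dx * dx + dz * dz <= radius * radius)
            energy += ph.power;
    }
    return energy;
}

int main()
{
    emitPhotons({ 0.0f, 4.0f, 0.0f }, 100.0f, 20000);
    std::printf("energy gathered near the origin: %f\n",
                gather({ 0.0f, 0.0f, 0.0f }, 0.5f));
}

The important point is the shape of the flow: every photon is emitted and stored before a single observer sample is taken, exactly as the two steps above describe.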

In the scene, surfaces are often rendered with smooth shading because the intensity of light leaving each point (the reflected radiance) doesn't change much across large surfaces. For this reason, it makes sense to use and re-use the illuminative information stored at the surfaces in the scene; this frees up memory in the long run when rendering vast and complex models. It means the illuminative information must be preserved and stored at the surfaces in the scene so that lighting information can be re-used. This is what radiosity does by storing the illuminative values in each element of the subdivided mesh. But as you recall from Chapter 5, creating a subdivided mesh of the original geometry can pose a series of new problems. You certainly don't want to deal with meshing constraints and artifacts.

The first thing you need to do is separate the lighting information from the geometry, which means storing the illuminative information in a separate entity without actually subdividing the original geometry. The illuminative information is not actually radiance; it is data that tracks the overall energy, or photons, arriving at each surface in the scene (the irradiance).
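
One way to picture that separate entity is a plain record per photon, as in the hypothetical sketch below: each entry stores where energy arrived, from which direction, and how much (flux, not radiance), with no reference back to any vertex, patch, or mesh element.

#include <vector>

// Each stored photon records where energy arrived, from which direction,
// and how much. Nothing in it points back to the geometry it landed on.
struct Photon {
    float position[3];   // point on a surface where the photon landed
    float incoming[3];   // direction the photon arrived from
    float power[3];      // RGB energy (flux) carried by the photon
};

// The scene's illumination lives in this separate container; the original
// geometry is never subdivided the way a radiosity mesh is.
std::vector<Photon> photonMap;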

In an unlit environment, everything looks pitch black, and even the smallest amount of light contributes to lighting the environment. For example, a candle in a big room can send light into the most unusual places. The more photons that propagate into a given region, the higher the amount of energy, or the higher the density of energy.
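
The sketch below illustrates this density idea under the same assumptions as the earlier examples (a hypothetical Photon record and photon list): the power of the photons found within a small search radius, divided by the area searched, gives an estimate of the irradiance at that point.

#include <vector>

struct Photon { float x, y, z; float power; };

// Estimate the irradiance at a point from the density of nearby photons:
// the total power that landed within the search radius, divided by the
// area of the search disc.
float estimateIrradiance(const std::vector<Photon>& photonMap,
                         float px, float py, float pz, float radius)
{
    float gathered = 0.0f;
    for (const Photon& ph : photonMap) {
        float dx = ph.x - px, dy = ph.y - py, dz = ph.z - pz;
        if (dx * dx + dy * dy + dz * dz <= radius * radius)
            gathered += ph.power;            // this photon landed in the region
    }
    return gathered / (3.14159265f * radius * radius);
}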

Because you are trying to simulate global illumination, you need to track energy everywhere in the scene, not only from the light source to a point on a surface. This is one area where ray tracing is weak: it cannot calculate indirect illumination properly. It simply isn't a global illumination algorithm. Shadows look pitch black, creating very ugly results in renders.

One fix to this problem is to add an ambient component to each surface so that every object in the scene is lit to some small degree. This brightens shadows, but it is a quick and cheap fix. Another fix is to cast several rays from the light sources to guarantee that shadows are cast properly, yet the penumbra of some shadows still can't be simulated. These problems arise mainly because ray tracing is a point-sampling algorithm. Radiosity is better than ray tracing at computing diffuse reflections, and you want to be able to correctly simulate diffuse reflection across a given surface. You also want to keep the specular reflection functionality seen in ray tracing because it is wonderful at computing that phenomenon.
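
For completeness, here is roughly what that quick ambient fix looks like in code; the names ka and ambientLight are illustrative stand-ins for the ambient coefficient and ambient intensity of a standard local lighting model, not anything defined in this book.

// directLight is whatever the ray tracer computed from visible light
// sources; ka and ambientLight are the usual ambient coefficient and
// ambient intensity. The constant term keeps shadows from going pitch
// black, but it is a flat guess rather than real indirect illumination.
float shade(float directLight, float ka, float ambientLight)
{
    return directLight + ka * ambientLight;
}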
