As with many organic effects, a low-quality ocean is easy to do, but good-looking results are really hard to achieve. In the end, it all depends on the amount of CPU cycles you can devote to rendering it. Will your game be a totally underwater experience such as Ecco the Dolphin, or will your sea play a secondary, purely decorative role? True-to-life oceans will be computationally expensive, so it is good practice to analyze your expectations before moving forward.
In its simplest form, an ocean is nothing but a textured, partially transparent plane. The texture tries to mimic the look and color of real water, and animating it conveys a higher sense of realism. But both the geometry and the appearance of this water are too simplistic. Oceans move and interact with light in complex ways, which affects their appearance. Let's examine geometry and appearance in detail.
Realistic Ocean Geometry
To create realistic oceans, we need to implement waves, sea spray, and so on. Several methods of increasing complexity have been devised through the years. One approach is to use a simple sine wave as the wave function, creating a uniform wavy look. Be warned, though: Trigonometric functions are expensive, so it's better to tabulate them beforehand, avoiding costly per-frame computations.
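A minimal sketch of the tabulated approach might look like this; the table size and the wave parameters are illustrative choices of mine, not values from the text:

```python
import math

# Precomputed sine table: trades a little memory for avoiding per-frame
# trigonometric calls. TABLE_SIZE is an illustrative choice.
TABLE_SIZE = 1024
SINE_TABLE = [math.sin(2.0 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def fast_sin(angle):
    """Look up sin(angle) in the table; angle in radians (non-negative)."""
    index = int(angle / (2.0 * math.pi) * TABLE_SIZE) % TABLE_SIZE
    return SINE_TABLE[index]

def wave_height(x, t, amplitude=0.5, wavelength=8.0, speed=2.0):
    """Height of a single traveling sine wave at position x and time t."""
    k = 2.0 * math.pi / wavelength  # spatial frequency of the wave
    return amplitude * fast_sin(k * x + speed * t)
```

With a 1,024-entry table the lookup error is well under one percent, which is more than enough for a decorative water surface.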
Because oceans are rarely regular, a single sine wave will simply not cut it. Thus, two other approaches have surfaced. Using the first approach, we can simulate ocean geometry by adding sine waves in an octave pattern, with each harmonic having double the frequency and half the amplitude of the previous one. If we jitter these with some form of noise (Perlin noise, for example), the results get better quickly. The second approach involves using mathematical models of the ocean, often implemented as particle systems: Each surface element is a particle connected by springs to its neighbors. Evolving this system with a physically based model lets us simulate ocean dynamics efficiently. You can find an excellent overview of both methods, and a complete example of a mathematical ocean framework, in Lasse Staff Jensen and Robert Golias' Gamasutra article, referenced in Appendix E, "Further Reading."
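The octave-summing approach can be sketched as follows. The per-octave phase and direction jitter is a cheap stand-in for the Perlin noise mentioned above, and all constants are illustrative:

```python
import math
import random

def ocean_height(x, z, t, octaves=4, base_amplitude=1.0, base_frequency=0.2):
    """Sum sine octaves: each harmonic doubles frequency and halves amplitude.

    A seeded pseudo-random phase and travel direction per octave stands in
    for the Perlin-noise jitter; the seed is fixed so the surface is stable
    across frames."""
    rng = random.Random(42)
    height = 0.0
    amplitude, frequency = base_amplitude, base_frequency
    for _ in range(octaves):
        phase = rng.uniform(0.0, 2.0 * math.pi)
        direction = rng.uniform(0.0, 2.0 * math.pi)
        dx, dz = math.cos(direction), math.sin(direction)
        height += amplitude * math.sin(frequency * (x * dx + z * dz) + t + phase)
        amplitude *= 0.5   # half amplitude per octave
        frequency *= 2.0   # double frequency per octave
    return height
```

With four octaves the total amplitude is bounded by 1 + 0.5 + 0.25 + 0.125 = 1.875 times the base amplitude, which makes it easy to normalize the result if needed.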
Ocean Appearance

Pure water is transparent, so its real-world look is mostly determined by external factors: reflections, color tints, light phenomena, and so on. In fact, the look of water is fundamentally defined by its interactions with light and other media, not so much by the water itself. Thus, to convey the sense of real water, we need to study how it interacts with its environment.
From an optics standpoint, water is a highly reflective, partially transparent material. Its intrinsic color is very subtle and only shows in large bodies of water. It varies with chemical microcomponents such as algae and bacteria, but generally lies in the blue-green range. Water's index of refraction distorts our sense of size, making submerged objects look larger than they really are. Additionally, both reflections and transmitted images are sharply defined, as in most liquids: Objects placed in or reflected by water still look focused, as opposed to stained glass, which is transparent but bends light rays randomly, producing a blurred look. Only when waters of different temperatures mix do blurry reflections and refractions appear.
Thus, the first step toward realistic water is using environment mapping to reproduce the reflective nature of its surface. Environment mapping is a well-understood technique, and its details are covered in Chapter 18, "Texture Mapping." We can use either spherical or cube mapping, the latter offering better results at the price of higher texture costs. The choice, as usual, depends on the effect you want to achieve. For an open-ocean game, a spherical map of the sky will quite likely be just fine. But if you want true reflections (in a lake surrounded by trees, for example), cube maps are the better choice. Just remember that if you plan on computing realistic reflections, you will need to recompute the cube map every frame (or every few frames), so the computational cost grows. Make sure the reflected geometry is simple, so the render-to-texture pass can be performed cheaply. Another alternative is an LOD strategy, in which the reflected models are scaled-down versions of the geometry. This can be very effective, especially if the reflectivity is not very high, because the viewer will not notice any difference.
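For the spherical case, the lookup boils down to reflecting the view direction about the surface normal and applying the classic sphere-map formula. This is a sketch of the math only; a real renderer would use the hardware texture-coordinate generation path instead:

```python
import math

def reflect(d, n):
    """Reflect direction d about unit normal n: r = d - 2(d.n)n."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def sphere_map_coords(r):
    """Classic sphere-map (u, v) for an eye-space reflection vector r.

    Note the formula is singular when r points straight back at the
    viewer, i.e. r = (0, 0, -1)."""
    rx, ry, rz = r
    m = 2.0 * math.sqrt(rx * rx + ry * ry + (rz + 1.0) ** 2)
    return (rx / m + 0.5, ry / m + 0.5)
```

A reflection vector pointing straight at the map's pole, (0, 0, 1), lands at the center of the texture, (0.5, 0.5), as expected.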
The transparency index of open-sea waters varies greatly depending on the chemicals in the water and the sun's intensity. Generally speaking, visibilities of about 15 to 20 meters are considered normal, which corresponds to a transparency index of between 73 and 80 percent per meter (that is, 73 to 80 percent of the light entering a one-meter column of water makes it to the other side; the rest is absorbed).
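Because absorption compounds meter by meter, the light surviving to a given depth is simply the per-meter transparency raised to that depth. A quick check (the function name is mine):

```python
def transmitted_fraction(transparency_per_meter, depth_meters):
    """Fraction of light surviving a given depth, assuming uniform water.

    With 80 percent transparency per meter, each meter keeps 0.8 of the
    light that entered it, so the survival compounds exponentially."""
    return transparency_per_meter ** depth_meters
```

At 15 meters with 80 percent per-meter transparency, only about 3.5 percent of the light remains, which lines up with the quoted visibility range of 15 to 20 meters.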
But absorption of light is just one of the consequences of water transparency. The other one is refraction: the bending of rays due to differences in the density of the different media. Light speed in open air is significantly faster than in denser media such as water, and thus light rays bend as they enter the liquid. This can successfully be simulated using shader techniques (explained in the next chapter). Notice, however, that most games totally ignore refraction because it is hardly visible in gameplay scenarios. The only examples would be games where players get to cross rivers, or other situations where they are partially immersed in water. A fully underwater game, for example, does not need to simulate refraction because all the action happens in the same medium (water), and thus no rays are bent.
One of the few exceptions to this rule is the phenomenon called caustics: concentrations of light caused by refraction, which create dancing patterns of light on the bottom of the sea. These are often visible in anything from lakes to underwater games.
Caustics are produced whenever light traveling along different paths ends up converging on the same spot: Some semitransparent object acts as a lens, focusing the rays of light onto a small area. The classic examples are the bright spot cast by a lens on a sheet of paper and the apparently chaotic patterns formed by waves on the bottom of a swimming pool.
Unfortunately, realistic caustics are computationally prohibitive. They are usually computed using forward ray tracing with photon maps, which takes minutes or even hours to compute for a single frame. Thus, some simplifications must be implemented in order to approximate them in real time. We will see some algorithms that efficiently mimic the look of real caustics interactively. But none of them will really implement caustics analytically because the process would be too expensive. Let's begin by learning why.
In a realistic caustic simulation, a number of light rays are shot from the light sources in all directions. These rays, or photons, represent the light being emitted by each lamp. Rays directly hitting opaque objects are simply eliminated (their lighting contribution is applied to the opaque object's surface), so the vast majority of rays disappear. But rays hitting semitransparent objects such as glass or water are transmitted into the new medium, and their direction changes due to the difference in the speed of light between the "outside" medium (generally air) and the "inside" medium (glass, water, and so on). This change of direction, or bending, is governed by Snell's Law, which states that:
sin(angle of incidence) / sin(angle of transmission) = IOR
Snell's Law is depicted in Figure 20.8. These refracted rays are the ones that will sometimes converge to create a caustic. Thus, we must follow them and assign them to surface points, so we can sum the contributions of the different rays and decide which pixels are affected by a caustic. The bad news is that this is a brute-force algorithm: We need millions of photons to reach a good level of realism, which rules the technique out for real-time use.
Figure 20.8. Snell's Law governs the bending of the rays as they cross media boundaries.
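In code, Snell's Law gives the transmitted angle directly. Here is a small helper of my own, with angles measured from the surface normal; past the critical angle the equation has no solution and the ray undergoes total internal reflection, which the function signals by returning None:

```python
import math

def refract_angle(incident_deg, ior_from=1.0, ior_to=1.33333):
    """Apply Snell's Law: n_from * sin(incident) = n_to * sin(transmitted).

    Defaults model a ray entering water from air (IOR of water: 1.33333).
    Returns the transmitted angle in degrees, or None on total internal
    reflection."""
    sin_t = math.sin(math.radians(incident_deg)) * ior_from / ior_to
    if abs(sin_t) > 1.0:
        return None  # total internal reflection: no transmitted ray
    return math.degrees(math.asin(sin_t))
```

A ray hitting the water at 45° continues underwater at roughly 32°, bent toward the normal; a ray trying to leave the water at 60° is totally internally reflected.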
A different, more successful approach involves backward ray tracing with Monte Carlo estimation. Here, we start at the bottom of the sea and trace rays backward, opposite to the direction the light actually traveled, trying to compute the sum of all lighting arriving at a given point. Ideally, this would mean solving the hemispherical integral of all light coming from above the point being lit; for practical reasons, the integral is estimated via Monte Carlo sampling instead. A beam of candidate rays is sent out in all directions over the hemisphere centered at the sampling point (in most cases, 16 rays per pixel suffice to achieve photorealistic results). Rays that hit other objects (such as a whale, ship, or stone) are discarded. Rays that hit the ocean surface, however, must have come from outside the water, making them good candidates, so they are refracted using the inverse of Snell's Law. The remaining rays are then propagated through the air to test whether each one actually emanated from a light source or was simply a false hypothesis. Only the rays that end up hitting a light source contribute to the caustic; the rest are discarded as false hypotheses.
Monte Carlo methods are somewhat faster than regular forward ray tracing: They narrow the number of rays down to a few million. But the number of samples still makes them too slow for real-time use. We can, however, simplify the backward Monte Carlo ray tracing idea to reach the desired results. We make some aggressive assumptions about which rays are good candidates for caustics, and thus compute only a subset of the arriving rays: specifically, one ray per pixel. The method therefore has a very low computational cost and produces something that closely resembles a caustic's look and behavior, even though it is totally incorrect from a physical standpoint.
To begin with, we assume we are computing caustics at noon at the Equator, which implies the sun is directly overhead. For the sake of our algorithm, we need to compute the angle of sky covered by the sun disk. The sun is between 147 and 152 million kilometers away from Earth depending on the time of year, and its diameter is 1.42 million kilometers. Half a page of algebra and trigonometry yields an angle for the sun disk of roughly 0.53°.
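The trigonometry behind that figure fits in a couple of lines. Using the distances quoted above, the disk subtends roughly half a degree (about 0.53° to 0.55°, depending on the distance used):

```python
import math

def sun_disk_angle_deg(distance_km, diameter_km=1.42e6):
    """Angular diameter of the sun disk as seen from Earth, in degrees.

    The half-diameter over the distance gives the half-angle; doubling
    it gives the full disk."""
    return math.degrees(2.0 * math.atan((diameter_km / 2.0) / distance_km))

near = sun_disk_angle_deg(147e6)  # sun at its closest: ~0.55 degrees
far = sun_disk_angle_deg(152e6)   # sun at its farthest: ~0.54 degrees
```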
The second assumption is that the ocean floor lies at a constant depth. Don't forget that the transparency of water is between 73 and 80 percent per linear meter: 20 to 27 percent of the incident light is absorbed by the medium (heating it up) for every meter traveled, giving a total visibility range of between 15 and 20 meters. Logically, this means caustics form most easily when light rays travel the shortest distance from the moment they enter the water to the moment they hit the ocean floor. Thus, caustics will be maximal for vertical rays and much less visible for rays entering the water sideways. This is an aggressive assumption, but it is key to the success of the algorithm. Notice, as a simple sanity check, that caustics are especially common on tropical, sandy beaches, which are usually close to the Equator and are indeed very shallow. So it seems we are heading in the right direction.
Our algorithm works as follows: We start at the bottom of the sea, right after we have painted the ground plane. Then, a second, additively blended pass renders the caustic on top of it. To do so, we create a mesh with the same granularity as the wave mesh, colored per vertex with the caustic value: Zero means no lighting, and one means a beam of very focused light hits the sea bottom. To compute this lighting, backward ray tracing is used. For each vertex of the mesh, we project it vertically until we reach the wave point directly above it. Then we compute the normal of the wave at that point using finite differences. With the ray direction and the normal, and using Snell's Law (remember that the IOR of water is 1.33333), we can create secondary rays that travel from the wave into the air. Now all we have to do is check which rays actually hit the sun after leaving the water. To do so, we compute the angle between each ray and the vertical using the dot product. We know the sun disk is 0.53° across, but its halo can be considered a secondary light source, on most days covering an arc about 10 times larger. We then use this angle to index a texture map, which encodes the luminosity of the caustic as a function of the angle (see Figure 20.9).
Figure 20.9. A texture map is used to bind ray angles to luminosities.
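A toy version of the per-vertex loop might look like this. The wave function, the halo width, and the linear falloff are stand-ins of my own; a real implementation would sample the actual wave mesh and index the luminosity texture of Figure 20.9 instead of the falloff:

```python
import math

IOR_WATER = 1.33333
SUN_HALO_DEG = 5.3  # sun disk (0.53 degrees) times ~10 for the halo

def surface_height(x, z, t):
    """Toy wave function standing in for the real wave mesh."""
    return 0.3 * math.sin(0.5 * x + t) + 0.15 * math.sin(0.7 * z + 1.3 * t)

def wave_normal(x, z, t, eps=0.01):
    """Unit surface normal via central finite differences of the heights."""
    dhdx = (surface_height(x + eps, z, t) - surface_height(x - eps, z, t)) / (2 * eps)
    dhdz = (surface_height(x, z + eps, t) - surface_height(x, z - eps, t)) / (2 * eps)
    length = math.sqrt(dhdx * dhdx + dhdz * dhdz + 1.0)
    return (-dhdx / length, 1.0 / length, -dhdz / length)

def caustic_intensity(x, z, t):
    """Backward-trace one vertical ray from a sea-floor vertex.

    Refract the ray through the surface (water to air) and measure how
    close the exit ray comes to pointing straight at the overhead sun."""
    nx, ny, nz = wave_normal(x, z, t)
    # The backward ray travels straight up: d = (0, 1, 0), so dot(d, n) = ny.
    cos_i = ny
    eta = IOR_WATER  # ratio of indices going from water into air
    k = 1.0 - eta * eta * (1.0 - cos_i * cos_i)
    if k < 0.0:
        return 0.0  # total internal reflection: ray never leaves the water
    # Standard refraction of direction d = (0, 1, 0) through normal n.
    scale = eta * cos_i - math.sqrt(k)
    ry = eta * 1.0 - scale * ny
    # Angle between the refracted ray and the vertical (sun at zenith),
    # obtained from the dot product with (0, 1, 0).
    angle = math.degrees(math.acos(max(-1.0, min(1.0, ry))))
    # Linear falloff standing in for the luminosity-texture lookup.
    return max(0.0, 1.0 - angle / SUN_HALO_DEG)
```

Evaluating this over a grid of floor vertices produces exactly the per-vertex zero-to-one values the additive pass needs.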
To recap, the technique I have described basically converts a caustic computation problem into an environment mapping problem, using the wave generator function to compute the texture coordinates. Luckily, the results (see Figure 20.10) are remarkably good.
Figure 20.10. Left: Wireframe view, showing the samples at the bottom of the sea. Right: Solid view, showing the typical interference patterns found in caustics.
This technique can also be implemented using classic graphics programming, but it was designed from day one to be coded using pixel shaders for increased efficiency. To do so, we compute the wave function and the backward ray tracing step in a shader, which also computes the derivative of the wave function. More details and a demo of this implementation can be found in the Gamasutra paper by Juan Guardado of NVIDIA and me, available on the Gamasutra web site.