The Particle Data Structure

Defining the particle data structure means answering the "what" question: What are we modeling? Is it going to be fire, smoke, or water? Clearly, different phenomena will require different particle parameters.

When choosing the particle data structure, it is essential to create a parameter set that is sufficient but not bloated. Given the parallel nature of particle systems, superfluous parameters will increase the memory footprint, whereas too few control parameters will degrade the system's visual quality.

Particle systems have traditionally dealt with fast-moving items with no interaction between them. Be it explosion sparks, rain, or water, the underlying structures are almost identical. Here is the original particle structure taken from the seminal Reeves SIGGRAPH paper (see references in Appendix E, "Further Reading"):

  • Position

  • Velocity (vectorial)

  • Size

  • Color

  • Transparency

  • Shape

  • Lifetime

This structure was used to create some rather advanced effects in Star Trek II: The Wrath of Khan. We can broadly divide the parameters into two different groups:

  • Parameters associated with the behavior of the particle

  • Parameters associated with the look of the particle

Behavior parameters should be sufficient to implement a compelling particle simulator. Thus, it is essential to analyze the kind of behavior we are trying to simulate, to understand which parameters are needed. The best place to gather this information is from physics handbooks, especially those dealing with dynamics for particles or rigid bodies. Study the forces you will apply to the particles. Do you need friction? Speed? Acceleration? Try to understand the role of global-scope constants. For example, it makes no sense to have each particle store the gravity constant. That value will be the same in all of them. On the other hand, the weight of each particle might be useful in some circumstances.

Once you have determined which behavior parameters your simulator needs, you will need to specify those parameters dealing with the look of the particle. Here the possibilities are almost endless. You will definitely need a particle color, blending mode, size, and texture identifier, but most particle systems need many more parameters. For solid particles, you might need to specify which mesh to paint; other systems call for particle animation counters, halos, and dozens of other features. Here the best advice is to decouple the particle rendering from the simulation loop, so you can test painting a single particle in a separate program. This way you can refine the visual result and decide which parameters your particle renderer needs to convey the right look.
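
To make the two groups concrete, here is a minimal sketch of what such a structure might look like. The point type is the 3D vector class used throughout the chapter's code, and the exact fields are illustrative choices, not a prescription:

 // A minimal particle structure, split into behavior and look parameters.
 // The point type and the exact fields are illustrative choices.
 struct particle
    {
    // behavior parameters
    point position;
    point velocity;
    float weight;         // per-particle mass; gravity itself stays global
    float age;            // seconds since spawn
    float time_to_live;   // seconds until the particle is respawned
    // look parameters
    float color[4];       // RGBA; alpha drives fading in and out
    float size;
    int   texture_id;
    };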

A Generic Particle System

Now that we have a global understanding of what a particle system is, let's take a closer look at how we can implement it. Consider the following class definition for a basic particle system:

 class particlesystem
    {
    particle *data;
    int numparticles;

    public:
       void create(int);   // creates a system of n particles
       void recalc();
       void render();

    private:
       void spawn(int);    // spawns particle n
       void affect(int);   // affect particle n by outside forces
    };

Notice how we need a spawn and affect routine on a particle basis. These two routines implement the core of the system: how particles are born and what simulation process they undergo. These two routines can actually be part of a particle class, if needed, to further clarify the code.

Spawning Particles

Particles are created at some kind of emitter, which initializes their parameters. If we want our system to behave in any interesting way, the key is to generate each particle with slightly different initial values, so the behavior rules (which are shared by all particles) make each one look like a unique element.

A variety of emitters have frequently been used. The most well known is the point emitter, which generates all particles at a point in space, as in an explosion. But explosions are not born in a single point in space. In fact, only the Big Bang happened this way, and we're not even sure about that. Explosions in the real world have volume and shape, as particles of fire emerge from the exploding object. Thus, sometimes we will perturb the point, so particles are born in an area surrounding it, but not quite at the point. This is a very common approach in particle system creation: adding an amount of randomness so results do not look too regular and algorithmic. For example, for our point emitter, we would use something like this:

 point pos(3,5,4);   // particles are born at (3,5,4)
 pos.x += ((float)(rand()%2000)-1000.0f)/1000.0f;
 pos.y += ((float)(rand()%2000)-1000.0f)/1000.0f;
 pos.z += ((float)(rand()%2000)-1000.0f)/1000.0f;

The last three lines implement a distortion of up to one unit in any direction, positive or negative. So in fact the volume particles emerge from is a cube two units on each side, centered at (3,5,4). This philosophy is so deeply rooted within this discipline that it even has a name: jittering, which describes the addition of controlled amounts of noise to reduce the algorithmic look of the simulation.

There are also other shapes of emitters. To simulate snow or rain, for example, you will probably use a 2D polygon aligned with the XZ plane. Here is the source code for such an emitter:

 point pos(3,5,4);   // particles are born at (3,5,4)
 pos.x += ((float)(rand()%2000)-1000.0f)/1000.0f;
 pos.z += ((float)(rand()%2000)-1000.0f)/1000.0f;

In this case I have created a square, which is very common. Other 2D shapes could be implemented as well.

A third type of emitter is referenced with regard to the player's position. Imagine that you need to simulate rain. Quite likely, you will not fill the whole game level with hundreds of thousands of individual raindrops. It simply makes no sense, because distant raindrops will not be visible. In this scenario, it is better to generate rain right where the user can notice it most, which is directly in front of the camera. Thus, we generate rain in local coordinates, and then translate and rotate the result to the particle system's final position and orientation. Here is an example of how to generate these particles. I assume fov is the horizontal aperture of the camera, and distnear and distfar are the range of distances we want to fill with particles:

 float dist=distnear+(distfar-distnear)*((float)(rand()%1000))/1000.0f;
 float angle=fov*(((float)(rand()%2000))-1000.0f)/1000.0f;
 point p(dist*cos(angle),0,dist*sin(angle));
 p.rotatey(camera_yaw);
 p.translate(camera_position);

This code generates particles right in front of the camera, so they fill the screen but don't go anywhere else. Only the particles we really need are taken into consideration.

Another type of emitter is the screen-based emitter, which is used for particle effects that are computed in screen space, such as the trail of water left by a raindrop on the camera's surface. These particles are born anywhere, but are always referenced to the camera's viewport. So, generating them is a bit different. A 2D particle system is rendered with the 3D pipeline switched off, much like a sprite engine from the old days. The generation of the particles is not very complex; it is just particles on a 2D rectangle. Here is the code:

 pos.x += SCREENX*((float)(rand()%2000)-1000.0f)/1000.0f;
 pos.y += SCREENY*((float)(rand()%2000)-1000.0f)/1000.0f;

This code assumes that SCREENX and SCREENY hold the screen resolution in pixels. Obviously, most of the complexity of these systems is not in the spawning of particles, but in the actual rendering. You can see different types of emitters in Figure 19.1.

Figure 19.1. Emitters for different effects.

Once we have the emitter, it is time to move on to the next parameters. The initial velocity should be determined next. Some popular choices are a directional velocity (all particles move in the same direction), a radial velocity (particles move away from the center), a rotational velocity (as in a tornado), or a random one. Each of these should be implemented with a certain degree of jitter, so different particles have slightly different parameters and hence a different evolution.
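
As an illustration, a radial velocity initializer with jitter might look like the following sketch; the jitter() helper and the normalize() method on the point class are assumptions of this fragment:

 // Sketch: radial initial velocity with jitter. jitter() returns a random
 // float in [-1,1]; normalize() is an assumed method on the point class.
 float jitter()
    {
    return ((float)(rand()%2000)-1000.0f)/1000.0f;
    }

 void init_radial_velocity(particle &p, const point &center, float speed)
    {
    point dir=p.position-center;          // away from the emitter's center
    dir.normalize();
    float s=speed*(1.0f+0.2f*jitter());   // +/-20% speed jitter
    p.velocity=dir*s;
    }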

Other parameters can be tuned as well: color, alpha (very important if particles are to fade away as they die), texturing coordinates, and so on must all be set. Additionally, we need to establish the particle's age and life cycle parameters. These can be implemented in two ways. The first is to set the particle's age to zero and store a second parameter, the particle's time to live, expressed in simulation ticks. At each tick, you increase the age until the time to live (which is nothing but a counter) is reached; the particle is then killed and respawned elsewhere. Because simulation ticks are usually fixed in length, we get device-independent speed.

An alternative is to store not the age, but the instant at which the particle was spawned. You would get that from the system with a call to timeGetTime(), store it, and then use it for age computations. At each loop you would query the current time with timeGetTime() again, subtract the time of birth from it (thus computing the age of the particle in milliseconds), and compare the result with the time to live, which is now a time period rather than a loop counter as in the previous case. As usual, different programmers prefer one approach or the other for personal reasons.
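
In sketch form, the timestamp-based approach reduces to a few lines; the field names are illustrative:

 // Sketch: timestamp-based aging. timeGetTime() returns milliseconds, so both
 // time_of_birth and time_to_live are stored per particle in milliseconds.
 DWORD now=timeGetTime();
 DWORD age=now-particle[i].time_of_birth;   // age in milliseconds
 if (age>particle[i].time_to_live)
    spawn(i);                               // recycle the particle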

Particle Behavior

If particle structures try to define what we are going to simulate, the behavior engine should attempt to mimic how that specific phenomenon evolves over time. Will it be a physically based simulation, or will it be stochastic? The most common choice is to implement some kind of dynamics on the particles so they mimic real-world phenomena.

Let's take a look at some examples of increasing complexity. To begin with, imagine that we are to render a rain particle system, trying to get a convincing heavy rain effect. Raindrops move at a very high speed as they fall to the ground, so we can make the assumption that they are unaffected by wind or other external forces. Making a fast object change its direction is hard, because the force we need to supply to it increases with speed. Thus, our raindrops are spawned with an algorithm similar to those explained in the previous section, and we only need to recompute their position.

Now, from very basic physics you know that

 Position = initial position + velocity*time 

Taking this as a differential equation for a brief period of time, we can rewrite it to form

 dPosition = Velocity*dt 

where dPosition is the change in position, Velocity is the instantaneous velocity at a given point in time, and dt is the time differential we want to evaluate position differences over. Here we are assuming that Velocity stays valid for the whole dt seconds, which is generally false: we are taking one sample and generalizing it to a whole time interval. This makes these kinds of simulators, called Euler integrators, very unstable in more complex systems. But for particle systems, they are the way to go because of their simplicity and elegance. So how do we convert this into running code? We substitute dt with the time elapsed between successive simulation ticks and use the current velocity as the speed parameter. Now, we can incorporate basic Newtonian physics into the equation. Let's review how this might work, starting with the equation

 F=m*a 

which turns into

 a=F/m 

Acceleration, in turn, is the second derivative of position, as in

 a=d2x/dt2 

So now we have the basic relationships between kinematics (acceleration) and dynamics (forces). The basic laws of kinematics can then be rewritten as

 v=dx/dt
 a=dv/dt

And thus all we have to do is represent forces and compute accelerations based on them. Take, for example, gravity, as governed by the expression

 f=m*g 

where g=(0,-9.8,0). Other, more interesting forces can be represented as well. Here is the expression for viscous drag, which is caused by a projectile trying to cross a medium that offers some resistance. Viscous drag is proportional to speed, so faster objects meet a larger opposing force. In this equation, kd is a medium-dependent constant called the drag coefficient:

 F = -kd * (dx/dt) = -kd * v 
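
In code, drag simply contributes one more term to the per-particle force sum; this fragment assumes the point class supports scaling by a float:

 // Sketch: adding viscous drag to the force accumulation step.
 point drag_force=particle[i].velocity*(-kd);   // opposes current velocity
 total_force=total_force+drag_force;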

A third, interesting law governs the behavior of particles connected by elastic springs. Each spring has an ideal rest length; if we compress it by moving the two particles closer than this rest length, the force will try to separate them, and if we try to separate them beyond the rest length, the force will oppose that as well. This law is called Hooke's spring law and is very interesting because it is the starting point of many physics simulation techniques. Here is the complete equation:

 f_a = -[ ks*(|L|-r) + kd*((dL/dt) . L)/|L| ] * (L/|L|)

where L is the vector separating the two particles, r is the spring's rest length, ks is the spring constant, kd is the damping constant, and f_a is the force acting on the first particle; the second particle receives the opposite force, -f_a.

It's an impressive piece of mathematics, but don't be put off by it. Our procedure will always be the same. Several types of forces are pictured in Figure 19.2. Here is the overall algorithm to compute physics-based particle systems:

  1. Compute the sum of forces interacting with the particle at a given point in time.

  2. Derive accelerations from forces.

  3. Use time differentials to compute changes in position and velocity using Euler integration.

Figure 19.2. Some possible forces that could act on particles.

Here is, for example, the code required to implement a behavior engine that takes gravity and a constant lateral force (such as a wind vector) into consideration. This can be used to create a geyser kind of effect:

 elapsed_time=(timeGetTime()-time_last_call)/1000.0;   // in seconds
 for (i=0;i<num_particles;i++)
    {
    // first, compute forces
    point gravity_force=particle[i].weight*gravity;
    point wind_force=(...);   // compute wind
    point total_force=gravity_force+wind_force;   // resulting force on particle
    // second, derive accelerations
    point accel=total_force/particle[i].weight;
    // third, integrate
    particle[i].velocity+=accel*elapsed_time;
    particle[i].position+=particle[i].velocity*elapsed_time;
    }

So it is actually pretty easy to create particle-based physical simulations. However, many behaviors are defined aesthetically, not physically. Keep that in mind when you create your particle system. Take, for example, a smoke column. Smoke trajectories are really hard to simulate physically. The smoke's direction depends on temperature, wind, the chemicals contained in the smoke, and a host of factors we simply cannot take into consideration in a real-time game. So, smoke is usually simulated aesthetically, not physically. If you think about it, a smoke plume is basically a series of smoke particles (each rendered nicely with a smoke texture quad). These particles emit from a single point with a velocity in the positive vertical direction. As they rise, they are affected by complex forces, which makes the shape of the column somehow interesting but also hard to simulate. Here is a pretty popular smoke simulator:

 elapsed_time=(timeGetTime()-time_last_call)/1000.0;   // in seconds
 for (i=0;i<num_particles;i++)
    {
    // first, compute forces
    point wind_force=noise(particle[i].position);
    point raise_force(0,1,0);   // due to temperature, smoke always rises
    point total_force=wind_force+raise_force;   // resulting force on particle
    // second, derive accelerations
    point accel=total_force/particle[i].weight;
    // third, integrate
    particle[i].velocity+=accel*elapsed_time;
    particle[i].position+=particle[i].velocity*elapsed_time;
    }

In this simulator, noise() is a call to the popular Perlin noise routine, which generates continuous noise in a 3D space. Noise allows us to create patterns of movement that somehow look like smoke, swirling and rising from the ground. Obviously, this approach is purely aesthetics driven, but after all, games are an aesthetics-driven industry at the core.
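
Implementing real Perlin noise is beyond the scope of this fragment, but the following hypothetical stand-in shows the interface the simulator expects: a repeatable pseudorandom vector for every 3D position. Note that real Perlin noise additionally interpolates smoothly between lattice points, which this hash-based version does not:

 // Hypothetical stand-in for noise(): a repeatable integer hash mapped to
 // [-1,1], sampled three times to build a vector. Not Perlin's algorithm;
 // real Perlin noise interpolates gradients smoothly across the lattice.
 float hash3(int x, int y, int z)
    {
    int n=x*73856093 ^ y*19349663 ^ z*83492791;
    n=(n<<13)^n;
    n=(n*(n*n*15731+789221)+1376312589) & 0x7fffffff;
    return 1.0f-((float)n/1073741824.0f);
    }

 point noise(const point &p)
    {
    int x=(int)p.x, y=(int)p.y, z=(int)p.z;
    return point(hash3(x,y,z), hash3(x+57,y,z), hash3(x,y,z+113));
    }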

Particle Extinction

Particles in a system are not meant to live very long. After their life cycle is depleted, they are deleted and respawned at the emitter. This usually means the particle has crossed the screen or performed a full cycle and can thus be reentered in a loop. Some particle systems, with explosions being the obvious example, will be nonlooping. Particles are created, live, die, and then the particle system as a whole is shut down. Even so, we will now focus on those systems where particles must be regenerated to understand the code required to do that.

Generally, a particle dies after its age surpasses its time to live. When this moment arrives, all we have to do is call the spawn routine again, passing this particle identifier as a parameter. By doing so, we get a new particle with fresh parameters where we had the old one. Thus, no memory deletion or reallocation is actually needed. All we do is recycle that position in the array for a new element.
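
Put together, the update loop of our particlesystem class might handle recycling as in this sketch; elapsed_time is assumed to be the tick length in seconds, and the age and time_to_live fields are illustrative:

 // Sketch: per-frame update with recycling. When a particle outlives its
 // time to live, spawn() simply overwrites that slot with a fresh particle.
 void particlesystem::recalc()
    {
    for (int i=0;i<numparticles;i++)
       {
       affect(i);                          // apply forces and integrate
       data[i].age+=elapsed_time;          // elapsed_time: tick in seconds
       if (data[i].age>data[i].time_to_live)
          spawn(i);                        // recycle this array slot
       }
    }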

From an aesthetics point of view, it is very important to fade particles out somehow as they approach their death. Failing to do so looks jarring, because bright particles suddenly vanish. A number of techniques can be used to handle this situation properly. In a rain particle system, for example, raindrops are not killed by age, but whenever they cross the ground plane. Thus, there is no possibility of particles "popping out" of the scene, because they will have crossed the ground plane in the frame right before their destruction.

Take a smoke column, for example. Here, we need to use alpha values cleverly to fade particles in and out. Particles will be born almost transparent, be opaque in their heyday, and die almost transparent again. Notice that the same care we take with the death of the particles must be taken with their birth as well: we do not want particles to pop in suddenly out of nowhere. A nice trick to ensure that particles alpha blend properly when they enter and leave the stage is to modulate their alpha with a sin function, such as

 alpha=sin(PI*age/maxage); 

The argument to the sin call ranges from 0 to Pi over the particle's life. The sin function therefore evaluates to zero (completely transparent) at both ends, and rises smoothly to 1, full opacity, at the middle of the particle's life.

Rendering Particles

The believability of a particle system depends on rendering as much as it depends on the actual simulation of the behavior. We could turn a mass of people into flowing water just by adjusting our particle rendering engine.

Given the tremendous range of phenomena that can be modeled using particles, it shouldn't be surprising that many different rendering approaches exist. But there are some general tips to follow, which are covered in the following sections.

Compute Particles Cheaply

Rendering particles can become troublesome, especially when dealing with large numbers of elements. Don't forget we need to rotate and translate each individual particle so it faces the viewer and gives the proper illusion. Rotating has a cost, especially when you do it many times. We could choose to render particles one by one, rotating each one with a matrix stack consisting of rotations and translations. But because matrix transforms can only be issued outside of a rendering batch, we would have to render each particle separately, and thus lose the option of sending them in batches, which is always more efficient. Thus, different methods must be used. A first approach is to align the particles to the camera yourself by deriving a right and an up vector, and then defining the particles' corners from them. In a world with yaw only, a right vector (with regard to the camera and screen) can be defined as

 point right(cos(yaw+pi/2),0, sin(yaw+pi/2)); 

And an up vector is simply

 point up(0,1,0); 

Then, a screen-aligned billboard of size S at position pos can be defined as

 P1=pos-right*S-up*S;
 P2=pos+right*S-up*S;
 P3=pos+right*S+up*S;
 P4=pos-right*S+up*S;

The preceding scheme is pictured in Figure 19.3.

Figure 19.3. Computing the up and right vectors for a particle.

This method can be easily extended to a full camera with roll, pitch, and yaw. In this case, computing the vectors yourself is not a good idea, because you can get them from the modelview matrix, as shown in the following code:

 glGetFloatv(GL_MODELVIEW_MATRIX, mat);
 right.create(mat[0], mat[4], mat[8]);
 up.create(mat[1], mat[5], mat[9]);

An even better alternative is to let the API align the billboards using hardware functionality. Most modern video cards can handle billboards internally, so all we send through the bus are particle coordinates and not much else. The card also takes care of constructing the quads and rendering them to the screen. This provides much better results because we save bus resources and avoid transforms. This functionality is present in both OpenGL and DirectX in the form of point sprites. In OpenGL it is part of an extension, and in DirectX it is built into the core API.
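
As a rough sketch, the OpenGL path via the ARB_point_sprite extension looks like this; availability of the extension must be checked at runtime, and the point size and loop body are illustrative:

 // Sketch: rendering particles as point sprites (ARB_point_sprite extension).
 glEnable(GL_POINT_SPRITE_ARB);
 glTexEnvi(GL_POINT_SPRITE_ARB, GL_COORD_REPLACE_ARB, GL_TRUE);
 glPointSize(16.0f);                  // on-screen particle size, in pixels
 glBegin(GL_POINTS);
 for (i=0;i<num_particles;i++)
    glVertex3f(particle[i].position.x,
               particle[i].position.y,
               particle[i].position.z);
 glEnd();
 glDisable(GL_POINT_SPRITE_ARB);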

Use Blending Modes Appropriately

Great expressive power is available if you learn how to unleash it. Many blending modes are available for you to experiment with and create unique-looking systems. The two basic modes are the filtered and the additive blend. The filtered blend is defined by these alpha values:

 SRC_ALPHA, ONE_MINUS_SRC_ALPHA 

It provides the look of a semitransparent particle, like rain. The particle is not self-illuminating, so adding more and more layers will not make the scene look brighter. That effect is achieved with additive blending, such as

 SRC_ALPHA, ONE 

So we do not filter the background, but add our components to it. This mode is used for anything involving light: fire, explosions, and so on. Be careful: Alpha values must be kept low, so the final image does not saturate too soon and retains some color.
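
In OpenGL, for example, the two modes map directly onto glBlendFunc:

 // Filtered blend: semitransparent particles such as rain or dust
 glEnable(GL_BLEND);
 glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

 // Additive blend: self-illuminating particles such as fire or sparks
 glBlendFunc(GL_SRC_ALPHA, GL_ONE);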

Animated Textures

Many particle systems need you to go one step beyond just animating particles using a high-quality simulator. Imagine a fireball, for example. Its dynamics are too fast and complex to render each particle as a static quad. Thus, we need to combine a good simulator with particle-level animation, so each particle has several frames of animation, usually stored in the same texture as tiles. The particle then cycles through its animation frames during its life cycle, making the overall look much more dynamic. If you choose to follow this path, just make sure particles are not in sync in their animation loops, or the effect will be completely destroyed: different particles must be at different frames of the animation at any given point in time.
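
A sketch of the frame selection follows; the atlas layout (N tiles per side), the playback rate fps, and the per-particle frame_offset used to desynchronize the loops are all assumptions of this fragment:

 // Sketch: picking a tile from an N x N animation atlas based on age.
 // frame_offset is a random per-particle phase so loops stay out of sync.
 int frames=N*N;                                       // tiles in the atlas
 int frame=((int)(particle[i].age*fps)+particle[i].frame_offset)%frames;
 float tile=1.0f/N;                                    // tile size in UV space
 float u0=(frame%N)*tile;
 float v0=(frame/N)*tile;
 // the particle's quad then uses (u0,v0)..(u0+tile,v0+tile) as texture coords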

Chained/Hierarchical Systems

The dynamics of the real world often combine several stages to create a complex composite illusion. Take the blast created by a rocket firing, for example. Fire emerges from the exhausts, and then huge clouds of smoke and water vapor are generated. Additionally, small particles of ice separate from the main structure, following different trajectories. How can we characterize that? Is it a particle system? Quite clearly not, because there are several types of particles, each governed by a different simulator with a different rendering engine. But the system can be seen as a whole: a system of particle systems. Thus, we must design our particle systems so we can aggregate and chain them into groups. We need arrays of systems and message-passing policies that allow chaining to take place. For example, it is a good idea to add an "extinguish behavior" to the particles, which defines what actions are carried out whenever we need to respawn a particle. This behavior will be empty by default, but we could decide to create a particle of a different kind when this one extinguishes. Think of a fire particle that, upon destruction, triggers a smoke system. This kind of overall architecture is necessary to create richer effects.
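
As a sketch of such a hook, a fire system might hand off to a smoke system when a particle dies; the class names and the emit_at() call are illustrative, not a fixed API:

 // Sketch: an "extinguish behavior" that chains systems. When a fire particle
 // dies, a chained smoke system emits a new particle at its last position.
 void fire_system::on_extinguish(int n)
    {
    if (smoke_system)
       smoke_system->emit_at(data[n].position);   // hand off to chained system
    spawn(n);                                     // then recycle the slot
    }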

Visual Parameters as Functions of Time

All visual parameters are functions of time. Take fire, for example. The core of the flame is usually blue; as it moves away, it turns yellow, then white, and slowly vanishes. Realistic particle systems need time-varying parameters. Color and alpha are the obvious ones, but there are others: the spin (speed of rotation), the frame in the animated texture, and so on all create a much more convincing effect if time enters the equation. If each particle has a slightly different time-response curve, the effects improve even further.
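
As a small sketch, most of these can be driven from the normalized age t of the particle; the spin and spin_rate fields are assumptions of this fragment:

 // Sketch: driving visual parameters from the normalized age t in [0,1].
 float t=particle[i].age/particle[i].maxage;
 particle[i].alpha=sin(PI*t);                       // fade in and out, as before
 particle[i].spin+=particle[i].spin_rate*(1.0f-t)*elapsed_time;   // slows with age
 // color could likewise be read from a per-system ramp indexed by t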


