Sound Effects Programming 101


In general, sound effects programming with DirectX Audio is very straightforward. We already touched on it a bit in Chapter 8 when we added 3D to our Hello World application, and the chapters on Segments and AudioPaths covered just about all the basics for loading and playing sounds on AudioPaths. Here is a quick walk through the specific things you must do to get sounds playing effectively with real-time control of their 3D position. We start with the assumption that you have already created the Loader and the Performance and have initialized the Performance.

Allocating a Sound Effects AudioPath

Although any possible AudioPath configuration can be defined in an AudioPath configuration file, for most sound effects work, three predefined types do the job admirably.

  • DMUS_APATH_DYNAMIC_3D: Use this for sounds that need to be positioned in 3D space. All sounds played on the AudioPath are mixed and then streamed through one DirectSound 3D Buffer for 3D spatialization. The 3D controls of the DirectSound Buffer can be directly manipulated by the application to move the mix in space.

  • DMUS_APATH_DYNAMIC_MONO: Use this for mono sounds that do not need 3D positioning. This mixes everything played on the AudioPath into a Mono DirectSound Buffer. Note that these can still be panned within the stereo spectrum.

  • DMUS_APATH_DYNAMIC_STEREO: Use this for stereo sounds. Again, everything is mixed down into one Buffer — this time in stereo.

To create one of these AudioPaths, call IDirectMusicPerformance8::CreateStandardAudioPath() and pass the type of AudioPath you want, along with the number of pchannels you will need on the AudioPath. The number of pchannels is really determined by how many pchannels are used in the Segments that will play on the path. For sound effects work, this is usually a small number. For example, if you are just playing wave files, only one pchannel is needed.

Here is the code to create a 3D AudioPath with four pchannels.

 IDirectMusicAudioPath *pPath = NULL;
 m_pPerformance->CreateStandardAudioPath(
     DMUS_APATH_DYNAMIC_3D,  // Create a 3D path.
     4,                      // Give it four pchannels.
     true,                   // Activate it for immediate use.
     &pPath);                // Returned path.

Later, when done with the AudioPath, just use Release() to get rid of it:

 pPath->Release(); 

Working with AudioPaths is quite different from programming directly with DirectSound Buffers. Importantly, you can play any number of sounds on the same AudioPath. This means that instead of creating a 3D buffer for each individual sound, you can create one AudioPath and play any number of Segments on it, and each Segment can have one or more sounds of its own that play at once. This paradigm shift is important to remember as you design your sound system. Forgive me if I sound a little redundant with this fact, but it is a good one to hammer home.

Playing Sounds on the AudioPath

First, load the sound into a Segment. The easiest way to do this is to use the Loader's LoadObjectFromFile() method, which takes a file name and returns the loaded object (in this case a Segment):

 IDirectMusicSegment8 *pSegment = NULL;

 // Now, load the Segment.
 m_pLoader->LoadObjectFromFile(
     CLSID_DirectMusicSegment,   // Class ID of Segment.
     IID_IDirectMusicSegment8,   // Segment interface.
     pwzFileName,                // File path.
     (void **) &pSegment);       // Returned Segment.

To play the Segment on the AudioPath, pass both the Segment and AudioPath to the Performance's PlaySegmentEx() method:

 IDirectMusicSegmentState *pSegState = NULL;
 HRESULT hr = m_pPerformance->PlaySegmentEx(
     pSegment,               // The Segment.
     NULL, NULL,             // Ignore these.
     DMUS_SEGF_SECONDARY,    // Play as a secondary Segment.
     0,                      // No time stamp.
     &pSegState,             // Optionally, get a SegState.
     NULL,                   // No prior Segment to stop.
     pPath);                 // Use AudioPath, if supplied.

Remember that there is no limit on how many Segments can play on how many AudioPaths. One Segment can play on multiple AudioPaths, and multiple Segments can play on one AudioPath.

If the Segment needs to be stopped for some reason, there are three ways to do it:

  • Stopping the Segment State: This stops just the one instance of the playing Segment, so all other Segments playing on the AudioPath continue. Use this to terminate one sound on an object without affecting others.

  • Stopping the Segment: This stops all instances of the Segment, no matter which AudioPaths they are playing on. This is the simplest as well as the least useful option.

  • Stopping everything in the AudioPath: This stops all Segments currently playing on the AudioPath. This is very useful for sound effects work.

In all three cases, call the Performance's StopEx() method and pass the Segment, Segment State, or AudioPath as the first parameter. StopEx() figures out which parameter it has received and terminates the appropriate sounds. In these two examples, we first pass the Segment State and then the AudioPath:

 // Stop just the one instance of a playing Segment.
 m_pPerformance->StopEx(pSegState, NULL, 0);

 // Stop everything on the AudioPath.
 m_pPerformance->StopEx(pPath, NULL, 0);

Activating and Deactivating the AudioPath

Even when all Segments on an AudioPath have stopped playing, the AudioPath continues to use system resources. This primarily involves the overhead of streaming empty data down to the hardware buffer. When there are many hardware buffers, this can be a substantial cost. It is a good idea to deactivate AudioPaths that are currently not in use. Deactivation does not release the AudioPath resources. It keeps them around so the AudioPath is ready to go the next time you need it, but it does eliminate the CPU and I/O overhead. Given that 3D buffers can be somewhat expensive on some hardware configurations, deactivation of the AudioPath is a good feature to use.

One method on IDirectMusicAudioPath, Activate(), handles both activation and deactivation. Here is example code for first deactivating and then activating an AudioPath:

 // Deactivate the AudioPath.
 pPath->Activate(false);

 // Activate the AudioPath.
 pPath->Activate(true);

Controlling the 3D Parameters

Once sounds are playing on an AudioPath, it really gets fun because now you can move them around in space by manipulating the 3D interface of the DirectSound Buffer at the end of the AudioPath. To do this, directly control the 3D Buffer itself via the IDirectSound3DBuffer8 interface, which can be retrieved via a call to the AudioPath's GetObjectInPath() method.

The IDirectSound3DBuffer8 interface has many powerful options that you can use to manage the sound image. Of obvious interest are the commands to set the position and velocity. You should be acquainted with all of the features of the 3D Buffer, however. These include:

  • Position: Position is the single most important feature. Without it, you cannot possibly claim to have 3D sound in your application. Get and set the position of the object in 3D space.

  • Velocity: Every object can have a velocity, which is used to calculate Doppler shift. DirectSound does not calculate velocity for you. It theoretically could, by measuring the change in position over time, but doing so would introduce a delay in any velocity change. Therefore, you must calculate the velocity yourself and set it for each Buffer if you want to hear the Doppler effect.

  • Max distance and min distance: These set the range of distances over which sounds are progressively attenuated. Sounds closer than min distance cease to increase in volume as they get closer, and sounds farther than max distance cease to get quieter as they recede.

  • Cone angle, orientation, and outside volume: A very sophisticated feature is the ability to define how an object projects its sound. You can specify a cone of sound that emits from an object. Cone orientation sets the direction in 3D space. Angle sets the width of an inner and outer cone. The inner cone wraps the space that plays at full volume. The outer cone marks the boundary where sound plays at outside volume. The area between the inner and outer cones gradually attenuates from full volume to outside volume.

  • Mode: You can also specify whether the sound position is relative to the listener (more on the listener in a moment), absolute in world space, or centered inside the listener's head. I'm not particularly fond of voices inside my head, so I avoid that last choice.

The DirectX SDK has excellent documentation on working with the DirectSound 3D Buffer parameters, so I am not going to spend a lot of time on this subject beyond the important position and velocity. Here is some example code that sets the 3D position of an AudioPath (more on setting velocity in our application example later in this chapter):

 // The position vector should be filled in with the desired coordinates.
 D3DVECTOR VPosition = { 0.0f, 0.0f, 0.0f };

 // We'll be using an IDirectSound3DBuffer interface.
 IDirectSound3DBuffer *p3DBuffer = NULL;

 // Use GetObjectInPath to retrieve the Buffer.
 if (SUCCEEDED(m_pAudioPath->GetObjectInPath(
     0,                          // Pchannel.
     DMUS_PATH_BUFFER,           // Retrieve from the Buffer stage.
     0,                          // First Buffer in the path.
     GUID_All_Objects, 0,        // Ignore object type.
     IID_IDirectSound3DBuffer,   // Request the 3D interface.
     (void **)&p3DBuffer)))
 {
     // Okay, we got the 3D Buffer. Position it.
     p3DBuffer->SetPosition(VPosition.x, VPosition.y, VPosition.z,
                            DS3D_IMMEDIATE);
     // Then let go of it.
     p3DBuffer->Release();
 }

Likewise, you can control any of the other 3D parameters on the Buffer by using the IDirectSound3DBuffer8 interface. Remember that you can adjust any of the regular DirectSound Buffer parameters, like frequency, pan, and volume, in a similar way with the IDirectSoundBuffer8 interface.

Controlling the Listener

In order for the 3D positioning of the sound objects to make any sense, they need to be relative to an imaginary person whose viewpoint corresponds with the picture we see on the monitor. In other words, the ears of the viewer should be positioned and oriented in much the same way as the eyes. This is called the "listener," as opposed to the visual "viewer." Once the listener information is provided, DirectSound is able to correctly calculate the placement of each 3D sound relative to the listener and render it appropriately.

The listener provides a series of parameters that you should become acquainted with as you develop your 3D sound chops:

  • Position: This determines the current position of the listener in 3D space.

  • Orientation: This determines the direction the listener is facing. Orientation is described by two 3D vectors. One sets the direction faced, and the second sets the rotation around that vector. Orientation addresses the question of which direction the listener is facing and whether the listener is upside down, looking sideways, or right side up.

  • Velocity: This determines the speed at which the listener is moving through space. This is combined with the velocities of the 3D objects to determine their Doppler shift.

  • Distance Factor: By default, positions are measured in meters. The Distance Factor is simply a number multiplied against all coordinates to translate into a larger or smaller number system. For example, you might want to work in feet instead of meters. Or, perhaps your game takes place in outer space and you are measuring astronomical units (AUs) instead of meters. Hmmm, for calculating Doppler, how long does it take sound to travel one light-year in outer space? (I sense that this is a trick question.)

  • Doppler Factor: By default, Doppler is calculated correctly based on the velocities provided for each object and the listener. For effect, it might be desirable to exaggerate the Doppler. The Doppler Factor multiplies the strength of the Doppler effect.

  • Rolloff Factor: The Rolloff Factor controls the rate at which sounds attenuate with distance from the listener. Use this to disable distance attenuation entirely, exaggerate it, or match the rolloff of the real world.

The listener provides methods for getting and setting each of these parameters independently as well as two methods, SetAllParameters() and GetAllParameters(), for accessing them all at once via a structure, DS3DLISTENER.

 typedef struct {
     DWORD      dwSize;
     D3DVECTOR  vPosition;
     D3DVECTOR  vVelocity;
     D3DVECTOR  vOrientFront;
     D3DVECTOR  vOrientTop;
     D3DVALUE   flDistanceFactor;
     D3DVALUE   flRolloffFactor;
     D3DVALUE   flDopplerFactor;
 } DS3DLISTENER, *LPDS3DLISTENER;

To access the Listener, use the AudioPath's GetObjectInPath() method. Since the Listener belongs to the primary Buffer, find it at the DMUS_PATH_PRIMARY_BUFFER stage. The following example gets the Listener via an AudioPath and, for grins, adjusts the Doppler Factor.

 IDirectSound3DListener8 *pListener = NULL;
 pPath->GetObjectInPath(
     0,                           // Pchannel.
     DMUS_PATH_PRIMARY_BUFFER,    // Retrieve from primary Buffer.
     0, GUID_All_Objects, 0,      // Ignore object type.
     IID_IDirectSound3DListener8, // Request the listener interface.
     (void **)&pListener);
 if (pListener)
 {
     DS3DLISTENER Data;
     Data.dwSize = sizeof(Data);

     // Read all of the listener parameters.
     pListener->GetAllParameters(&Data);

     // Now change something for the sake of this example.
     Data.flDopplerFactor = 10;   // Really exaggerate the Doppler.
     pListener->SetAllParameters(&Data, DS3D_IMMEDIATE);
     pListener->Release();
 }




DirectX 9 Audio Exposed: Interactive Audio Development
ISBN: 1556222882
Year: 2006
Pages: 170