Working with AudioPaths


Once you've created an AudioPath, there's a lot you can do with it, from starting and stopping Segments to accessing and adjusting various objects in the AudioPath. Let's look at all of these. We can start with the one you probably won't need: downloading instruments and waves directly to the AudioPath.

Downloading and Unloading with AudioPaths

As we discussed in previous chapters, before a Segment can be played, all of its DLS instruments and waves must be downloaded to the synth. As long as there is only one synthesizer, this is a relatively simple proposition: download everything directly to the one synth, regardless of which AudioPath it will play on. Indeed, that is what we have been doing so far by downloading to the Performance.
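For comparison, here is what that Performance-level download looks like in code. This is a minimal sketch; pSegment and pPerformance are assumed to be valid IDirectMusicSegment8 and IDirectMusicPerformance8 pointers obtained elsewhere, and error checking is omitted:

```cpp
// Download the Segment's instruments and waves to the synth
// via the Performance. With a single synth, no AudioPath is needed.
pSegment->Download(pPerformance);

// ... play the Segment as many times as you like ...

// When the Segment is retired, free its instruments and waves.
pSegment->Unload(pPerformance);
```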

AudioPaths introduce an extra, though optional, level of complexity. Along with the DMOs, buffers, and tools, the AudioPath configuration also defines what synth (or even multiple synths) to use. That's right, even synths are plug-in components in DirectMusic! This makes things a little more interesting because only the AudioPath knows which instruments on which pchannels go to which synths. So, any downloading and unloading of Segments must include the AudioPath to manage the routing of data to the correct synths. To do so, pass the IDirectMusicAudioPath interface to the Segment's Download() and Unload() methods:

    // Download the Segment's instruments and waves
    // to the synth(s) managed by the AudioPath.
    pSegment->Download(pAudioPath);

    // Later, when done, unload the instruments and waves,
    // again via the AudioPath.
    pSegment->Unload(pAudioPath);

Note that if there are multiple copies of the same AudioPath, you don't need to download to each one. Downloading to one will get the instruments and waves in place in the synth for use by all. This is a common misconception. Also, it's actually very rare to be using more than one synth. So, in the 99 percent of cases where you are using only the standard Microsoft DLS2 synth, the easiest solution is to download to the Performance and not worry about the AudioPaths. In fact, that is exactly what Jones does.

Playing on an AudioPath

You can play Segments directly on an AudioPath. Or, you can make it the default AudioPath for the Performance, in which case all Segments will default to playing on it. To play a Segment directly on the AudioPath, pass it as a parameter to PlaySegmentEx():

    pPerformance->PlaySegmentEx(
        pSegment,NULL,NULL,0,0,NULL,NULL,pAudioPath);

To stop a Segment playing on an AudioPath, you don't need to do anything special; just pass the Segment or SegmentState to the Performance's StopEx() method, and it will stop it. However, there's a cool trick if you want to stop everything that is playing on a particular AudioPath: pass the IDirectMusicAudioPath interface to StopEx() instead of a Segment or SegmentState (IDirectMusicSegment or IDirectMusicSegmentState), and StopEx() will stop all Segments that are currently playing on just that one AudioPath. This is very useful, especially when playing complex sounds in 3D and you need to simply shut off all sounds from a particular source.

    pPerformance->StopEx(pAudioPath,0,0); 

Likewise, you can take advantage of PlaySegmentEx()'s ability to stop one or more Segments at the very moment the new Segment starts. If you pass the IDirectMusicAudioPath pointer in PlaySegmentEx()'s pFrom parameter, all Segments playing on that AudioPath are stopped at the instant the new Segment begins. For example, you might use this to shut off all engine and tire sounds for a race car when it hits a wall and an explosion Segment is played. In the following example, pCarAudioPath is the AudioPath used for all sounds coming from the car, and pExplosionSegment is the Segment with the explosion sound.

    pPerformance->PlaySegmentEx(
        pExplosionSegment,NULL,NULL,0,0,NULL,pCarAudioPath,pCarAudioPath);

This tells DirectMusic to play the explosion sound on the car's AudioPath while shutting down all Segments currently playing on the same AudioPath.

Embedded AudioPaths

Using DirectMusic Producer, it is possible to embed an AudioPath in a Segment. This is handy because it lets you attach everything necessary to play a Segment to the Segment itself. For example, if the Segment were designed to play with a particular reverb on some of the MIDI channels and a combination of compression and parametric EQ on some other channels, you could create an AudioPath configuration with the required reverb, compression, and EQ settings and place it within the Segment. Then, on playback, you'd get exactly the configuration of effects that was designed with the Segment in mind.

Once an AudioPath has been embedded in a Segment, there are two ways to use it: you can either retrieve the AudioPath configuration directly by calling the Segment's GetAudioPathConfig() method, or tell PlaySegmentEx() to use the embedded AudioPath automatically.

Here's how you use the configuration in the Segment:

    // Get the AudioPath configuration from the Segment.
    pSegment->GetAudioPathConfig(&pAudioPathConfig);

    // Create an AudioPath from the configuration.
    pPerformance->CreateAudioPath(pAudioPathConfig,TRUE,&pAudioPath);

    // Done with the configuration.
    pAudioPathConfig->Release();

    // Play the Segment on the AudioPath.
    pPerformance->PlaySegmentEx(pSegment,NULL,NULL,0,0,NULL,NULL,pAudioPath);

    // Release the AudioPath. It will go away when the Segment finishes.
    pAudioPath->Release();

Or, just tell PlaySegmentEx() to use the embedded AudioPath:

    pPerformance->PlaySegmentEx(pSegment,NULL,NULL,
        DMUS_SEGF_USE_AUDIOPATH, // Use the embedded path, if it exists.
        0,NULL,NULL,NULL);

At first glance, it would seem like the latter is always the preferable solution. Certainly, it is a lot more convenient. However, it is not as flexible, and it carries a performance cost: creating an AudioPath, especially one with its own dynamic buffers and DMO effects, takes CPU cycles, and the DMUS_SEGF_USE_AUDIOPATH flag creates a fresh AudioPath on every play. So, if you intend to reuse the AudioPath for the playback of more than one Segment, it quickly becomes smarter to create it once and manage it directly.
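The direct-management alternative looks like this. A sketch, under the assumption that pSegmentA and pSegmentB are hypothetical Segments designed for the same effects setup, with pSegmentA carrying the embedded configuration; error checking is omitted:

```cpp
// Create the AudioPath once, up front, from the embedded configuration.
IUnknown *pConfig = NULL;
IDirectMusicAudioPath *pPath = NULL;
pSegmentA->GetAudioPathConfig(&pConfig);
pPerformance->CreateAudioPath(pConfig, TRUE, &pPath);
pConfig->Release();

// Reuse the same path for any number of plays; the cost of building
// its buffers and DMOs is paid only once.
pPerformance->PlaySegmentEx(pSegmentA, NULL, NULL, 0, 0, NULL, NULL, pPath);
pPerformance->PlaySegmentEx(pSegmentB, NULL, NULL, 0, 0, NULL, NULL, pPath);

// Release the path when the sound source is gone for good.
pPath->Release();
```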

Accessing Objects in an AudioPath

With all the wonderful things you can place in an AudioPath, it's important that you be able to access them. For example, if you create an AudioPath with a 3D Buffer, you will need to access the 3D interface on a regular basis to update the 3D position as the object moves in space.

So, there's a special method, GetObjectInPath(), that you use to access the objects in an AudioPath, both before Segments play on it and while it is actively processing playing Segments. Since there are so many different types of objects that can reside in an AudioPath, each with its own interface, GetObjectInPath() takes a generic approach, using a series of parameters to identify the instance and position of the object and an IID to request the desired interface.

    HRESULT hr = pPath->GetObjectInPath(
        DWORD dwPChannel,       // Pchannel to search.
        DWORD dwStage,          // Position in AudioPath.
        DWORD dwBuffer,         // Which buffer, if in DirectSound.
        REFGUID guidObject,     // Class ID of object.
        DWORD dwIndex,          // Nth item.
        REFGUID iidInterface,   // Requested interface IID.
        void ** ppObject        // Returned interface.
    );

Gadzooks! That's a lot of parameters! Have no fear, it will all make sense, and you typically don't need most of these. Let's look at them in order:

  • DWORD dwPChannel: This is the Performance channel to search. dwPChannel is necessary because a Tool or DMO could be set to play only on specific channels. If that level of precision is not needed, DMUS_PCHANNEL_ALL will search all channels.

  • DWORD dwStage: The AudioPath is broken down into a series of "stages," each representing a sequential step in the AudioPath's route. These are represented by a series of hard-coded identifiers, starting with DMUS_PATH_AUDIOPATH_GRAPH, which represents the Tool Graph embedded in the AudioPath, all the way to DMUS_PATH_PRIMARY_BUFFER, which identifies the primary Buffer at the end of the DirectSound chain. Frequently used stages include DMUS_PATH_AUDIOPATH_TOOL and DMUS_PATH_BUFFER_DMO.

  • DWORD dwBuffer: When accessing a DirectSound Buffer or DMO embedded within a Buffer, providing the stage and pchannel is not enough because there could be more than one Buffer in parallel. Therefore, dwBuffer identifies which Buffer to search, starting with 0 for the first Buffer and iterating up. Otherwise, this should be 0.

  • REFGUID guidObject: For DMOs, Tools, and synth ports, a class ID is needed to identify which type of object to look for. Optionally, GUID_All_Objects will search for objects of all classes.

  • DWORD dwIndex: It is entirely possible that more than one object of a particular type exists at the same stage in the AudioPath. For example, if there are two DMOs of the same type in a Buffer, use dwIndex to identify each. Alternately, when guidObject is set to GUID_All_Objects, dwIndex can be used to iterate through all objects within one particular stage.

  • REFGUID iidInterface: You must provide the IID of the interface that you are requesting. For example, to get the 3D interface on the Buffer, pass IID_IDirectSound3DBuffer.

  • void ** ppObject: This is the address of a variable that receives a pointer to the requested interface. The interface is AddRef()'d, so be sure to Release() it when done.

Here's an example of using GetObjectInPath() to update the 3D position of a 3D AudioPath:

    void Set3DPosition(IDirectMusicAudioPath *pPath,
                       D3DVALUE x, D3DVALUE y, D3DVALUE z)
    {
        IDirectSound3DBuffer *p3DBuffer = NULL;
        // Use GetObjectInPath to access the 3D interface.
        if (SUCCEEDED(pPath->GetObjectInPath(
            DMUS_PCHANNEL_ALL,           // Ignore pchannels.
            DMUS_PATH_BUFFER,            // Buffer stage.
            0,                           // First buffer.
            GUID_All_Objects,            // Ignore object type.
            0,                           // Ignore index.
            IID_IDirectSound3DBuffer,    // 3D buffer interface.
            (void **)&p3DBuffer)))
        {
            // Okay, we got it. Set the new coordinates.
            p3DBuffer->SetPosition(x,y,z,DS3D_IMMEDIATE);
            // Now release the interface.
            p3DBuffer->Release();
        }
    }

Setting the AudioPath Volume

The AudioPath interface has a very useful method, SetVolume(), that changes the volume of everything playing on it. SetVolume() takes as parameters the requested volume setting (which is always a negative number because it is really attenuation) and the amount of time to fade or rise to the requested level.

    // Drop the volume to -6 dB and take 200ms to do so.
    pAudioPath->SetVolume(-600,200);

Note

SetVolume() can be very CPU expensive if you aren't careful. It works by sending Volume MIDI messages down to the synth for every pchannel in the AudioPath, and if you specify a duration other than 0 (immediate), it sends a series of these messages to create a smooth fade. If you have created an AudioPath with many more pchannels than you are actually using, you will pay for it with lots of wasted volume messages, which add up. Create only as many pchannels as you need, and if you don't need a fade (i.e., a sudden change in volume is acceptable, perhaps because nothing is playing), set the fade duration to 0.
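The two calls below illustrate the difference; pAudioPath is a hypothetical path, and the volume values follow the hundredths-of-a-decibel convention used above. The duration argument alone determines how many Volume messages per pchannel the synth receives:

```cpp
// Cheap: a single immediate Volume message per pchannel.
pAudioPath->SetVolume(-1200, 0);    // jump straight to -12 dB

// Expensive: a stream of Volume messages per pchannel,
// spread over 500ms to approximate a smooth fade.
pAudioPath->SetVolume(-9600, 500);  // fade to near silence
```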

Shutting Down an AudioPath

Running AudioPaths consumes memory and CPU resources, especially if they have DMOs and dynamic buffers. It's always a good idea to get rid of an AudioPath when you are done using it. To do this, simply call the AudioPath's Release() method.

    // Bye bye...
    pPath->Release();

Note

If an AudioPath is currently actively playing Segments, it will not go away when you call Release(). Instead, it will wait until the last of the Segments playing on it finishes.

In some cases, you might want to keep the AudioPath around for a short while between uses, but deactivated so that CPU is not wasted. To do so, call the AudioPath's Activate() method, which takes a Boolean parameter: true to activate, false to deactivate.

    // Don't need the path for a while...
    pPath->Activate(false);

    // Okay, fire it back up, we need it again...
    pPath->Activate(true);

It's important to note that the operations of creating, releasing, activating, and deactivating an AudioPath are all glitch free. In other words, you can add and remove AudioPaths to a playing performance without fear of an audio hiccup.




DirectX 9 Audio Exposed: Interactive Audio Development
ISBN: 1556222882