Adding Voice Chat to Your Sessions

Sitting around and typing chat messages while playing your favorite online games is a common occurrence. Even though many of these players can type quite quickly, and often use a form of shorthand, the fact remains that in many types of games, the act of stopping to type a chat message leaves you vulnerable, even if only briefly. What you need is a hands-free way to communicate that allows you to keep playing the game. As humans, we've relied on voice communication for thousands of years, and unless you're using sign language, it's completely hands-free.

Adding voice chat into your application can be quite simple, although with a little extra work you can have full control over everything. To handle this simple case first, let's take the peer-to-peer session application we wrote in Chapter 18, "Simple Peer-to-Peer Networking," and add voice to it. No fancy options, just allow everyone in the session to speak with everyone else.

Before we begin, we will need to add a few things to the project file. First, the voice communication used in DirectPlay goes through DirectSound, so we will need to include a reference to the DirectSound assembly. Since there are Server and Client classes in our voice namespace as well, we should also add the following using clauses:

    using Voice = Microsoft.DirectX.DirectPlay.Voice;
    using Microsoft.DirectX.DirectSound;

In order to actually use voice during a session, we will need both a client and a server object. The server will be responsible for starting the voice session, while the client will naturally connect to the server's session. In a peer-to-peer session, where the host is also a member of the session, that peer will need to be both the server and a client of the voice session. Let's add our variable declarations:

    private Voice.Client voiceClient = null;
    private Voice.Server voiceServer = null;

Naturally, without a server, the client has nothing to connect to, so we should go about creating a server first. Find the section of code where we first host our peer-to-peer session. Directly after the EnableSendDataButton call in the InitializeDirectPlay method, add the following code:

    // Create our voice server first
    voiceServer = new Voice.Server(connection);

    // Create a session description
    Voice.SessionDescription session = new Voice.SessionDescription();
    session.SessionType = Voice.SessionType.Peer;
    session.BufferQuality = Voice.BufferQuality.Default;
    session.GuidCompressionType = Voice.CompressionGuid.Default;
    session.BufferAggressiveness = Voice.BufferAggressiveness.Default;

    // Finally start the session
    voiceServer.StartSession(session);

As you can see, when we create our voice object, we also need to pass in the DirectPlay object that will act as the transport for the voice communication. This will allow the voice to be transmitted along with the other data in your session. Before we can actually start a session, though, we need to have a basic way to describe the session.

Since we are in a peer-to-peer networking session, we choose the peer session type for our voice communication. This sends all the voice data directly between all of the players. The other session types are

  • Mixing: In this mode, all voice communication is sent to the server. The server mixes the combined voice data into a single stream and forwards it on to the clients. This dramatically reduces CPU time and bandwidth on each client, but correspondingly raises both for the server.

  • Forwarding: All voice communication is routed through the session host in this mode. It lowers the bandwidth for each client, but drastically increases the bandwidth for the host. Unless the host has a large amount of bandwidth, this option won't help you much.

  • Echo: Voice communication is echoed back to the speaker, which is mainly useful for testing.
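Switching to one of these other topologies only requires changing the session type in the description; everything else stays as before. As a sketch (assuming the SessionType enumeration exposes a Mixing value, as the list above implies):

    // Let the server mix all voice streams and send each client one stream
    Voice.SessionDescription session = new Voice.SessionDescription();
    session.SessionType = Voice.SessionType.Mixing;  // instead of Peer
    session.BufferQuality = Voice.BufferQuality.Default;
    session.GuidCompressionType = Voice.CompressionGuid.Default;
    session.BufferAggressiveness = Voice.BufferAggressiveness.Default;
    voiceServer.StartSession(session);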

Another option we can set in the session description is the codec we want to use for voice communication. We've chosen the default codec here, but there are actually numerous codecs you can use. This code fragment will print out each of them in the output window:

    foreach (Voice.CompressionInformation ci in voiceServer.CompressionTypes)
    {
        Console.WriteLine(ci.Description);
    }

The compression information structure contains enough data to determine which codec you wish to use. You may also use any of the ones listed in the CompressionGuid class.

The remaining options we set for the session are the quality and aggressiveness of the buffer we are using for the voice communication. We simply want to use the default options for these.

With the server created and a session started, it will be important to ensure that we actually stop the session and dispose of our object when we are finished. Add the following code to your dispose override (before calling dispose on your peer object):

    if (voiceServer != null)
    {
        voiceServer.StopSession();
        voiceServer.Dispose();
        voiceServer = null;
    }

Nothing all that fancy here; we simply stop the session and dispose of our object. However, we only have half of our voice communication implemented. Sure, we have a server running, but what about the clients? Add the function found in Listing 20.2 into your application.

Listing 20.2 Connect to a Voice Session
    private void ConnectVoice()
    {
        // Now create a client to connect
        voiceClient = new Voice.Client(connection);

        // Fill in description object for device configuration
        Voice.SoundDeviceConfig soundConfig = new Voice.SoundDeviceConfig();
        soundConfig.Flags = Voice.SoundConfigFlags.AutoSelect;
        soundConfig.GuidPlaybackDevice = DSoundHelper.DefaultPlaybackDevice;
        soundConfig.GuidCaptureDevice = DSoundHelper.DefaultCaptureDevice;
        soundConfig.Window = this;

        // Fill in description object for client configuration
        Voice.ClientConfig clientConfig = new Voice.ClientConfig();
        clientConfig.Flags = Voice.ClientConfigFlags.AutoVoiceActivated |
            Voice.ClientConfigFlags.AutoRecordVolume;
        clientConfig.RecordVolume = (int)Voice.RecordVolume.Last;
        clientConfig.PlaybackVolume = (int)Voice.PlaybackVolume.Default;
        clientConfig.Threshold = Voice.Threshold.Unused;
        clientConfig.BufferQuality = Voice.BufferQuality.Default;
        clientConfig.BufferAggressiveness = Voice.BufferAggressiveness.Default;

        // Connect to the voice session
        voiceClient.Connect(soundConfig, clientConfig, Voice.VoiceFlags.Sync);
        voiceClient.TransmitTargets = new int[] {
            (int)Voice.PlayerId.AllPlayers };
    }

This looks a lot more intimidating than it is. First, we create our voice client object, once again passing in our DirectPlay connection object to be used as the transport. Before we can actually connect to our session, though, we need to set up the configuration for both our sound card as well as the client.

The SoundDeviceConfig structure is used to tell DirectPlay Voice information about the sound devices you wish to use for the voice communication. In our case, we want to automatically select the microphone line, and use the default playback and capture devices. If you remember back to our DirectSound chapter, the cooperative level was set based on the window, so we pass that in as well.

Next, we want to establish the runtime parameters for the client, and that is done with the ClientConfig structure. We set our flags so that the record volume is automatically adjusted to give the best sound quality, and to signify that we want to automatically start transmitting voice whenever sound is actually picked up from the microphone. When you use the AutoVoiceActivated flag, you must also set the threshold value to unused, as we did previously. If the voice communication is "manually" activated instead, the threshold value must be set to the minimum input level needed to start transmission.
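The manual-activation case described above would change only the flags and the threshold. This is a sketch; the ManualVoiceActivated flag name and the Threshold values other than Unused are assumptions based on the rule just described, so check them against the enumerations in your SDK:

    // Manual activation: transmit whenever the input exceeds the threshold
    clientConfig.Flags = Voice.ClientConfigFlags.ManualVoiceActivated |  // assumed name
        Voice.ClientConfigFlags.AutoRecordVolume;
    clientConfig.Threshold = Voice.Threshold.Default;  // minimum level to start transmitting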

Finally, we set our playback and recording volumes to the defaults, along with the buffer quality and aggressiveness, much like we did with the server. We then connect to the hosting session. We use the synchronous parameter here because for simplicity I didn't want to create the event handlers yet. Once we've been connected to the session, we set the transmit targets to everyone, signifying that we want to talk to the world.

With the code to create a client and set the transmit targets, we are left with two remaining items. First, we need to deal with cleanup. Before your voice server object is disposed, add the following:

    if (voiceClient != null)
    {
        voiceClient.Disconnect(Voice.VoiceFlags.Sync);
        voiceClient.Dispose();
        voiceClient = null;
    }

Simple stuff; we disconnect from the session (using the synchronous flag once more), and then dispose our object. The last remaining item is to actually call our ConnectVoice method from somewhere. Since both the host and the clients of the DirectPlay session will need to create this voice client, you will put the call in two places. First, place the call after the StartSession method on your server to allow the host to create its voice client. Next, in the OnConnectComplete event handler, if the connection was successful, put the call in that block as well:

    if (e.Message.ResultCode == ResultCode.Success)
    {
        this.BeginInvoke(new AddTextCallback(AddText),
            new object[] { "Connect Success." });
        connected = true;
        this.BeginInvoke(new EnableCallback(EnableSendDataButton),
            new object[] { true } );
        ConnectVoice();
    }
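For the host-side call mentioned above, the addition is a single line immediately after the voice session starts:

    // The host is also a client of its own voice session
    voiceServer.StartSession(session);
    ConnectVoice();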

CHECKING AUDIO SETUP

It's entirely possible that if you haven't ever run the voice wizard, the Connect call will fail with a RunSetupException. If this is the case, you can catch the exception and run setup with the following code fragment:

    Voice.Test t = new Voice.Test();
    t.CheckAudioSetup();

After the setup has been run, you can try your connection once more. This setup should only need to be run once.
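Putting those pieces together, the connect call can be wrapped like this (a sketch that retries once after setup succeeds):

    try
    {
        voiceClient.Connect(soundConfig, clientConfig, Voice.VoiceFlags.Sync);
    }
    catch (Voice.RunSetupException)
    {
        // The audio setup wizard has never been run on this machine
        Voice.Test t = new Voice.Test();
        t.CheckAudioSetup();

        // Setup succeeded; try the connection once more
        voiceClient.Connect(soundConfig, clientConfig, Voice.VoiceFlags.Sync);
    }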



Managed DirectX 9 Graphics and Game Programming, Kick Start
ISBN: B003D7JUW6
EAN: N/A
Year: 2002
Pages: 180
Authors: Tom Miller
