Working with Multimedia

   

The Windows multimedia system provides a standard means by which multimedia devices can be controlled. At its core, this system simply mediates the communication between an application and a particular device driver. The Media Control Interface (MCI) adds even more flexibility by providing a common means by which applications can communicate with all supported audio and video devices. From the developer's point of view, the specifics of the device are irrelevant; oftentimes, even the type of device is of no concern.

In this section, we'll first discuss the use of the MCI for creating general-purpose audio and video applications. In Chapter 14, we touched on the MCI capabilities provided by the Win32 API. Now, we will dig a little deeper. As you will soon discover, the MCI is perhaps the easiest multimedia interface to work with. Next, we'll examine the use of the waveform-audio interface and how it can be used for increased audio-based functionality. Finally, we'll tackle the issue of audio streams, and examine how to read and write waveform-audio files.

The Media Control Interface (MCI)

In the same way that the GDI offers a generic means by which you can communicate with graphics-based devices, the Windows MCI enables you to program multimedia devices in a device-independent manner. Before the advent of the MCI, developers were required to write code that targeted specific devices. Often this process involved using procedures specific to a particular device driver. It's not hard to imagine the hassle that such a scheme presented: an application's audience was limited to the owners of a specific range of supported devices. For example, many of the early DOS-based games required that the sound card be compatible with the original Sound Blaster standard. Otherwise, the game would not generate any audible sound or music from the sound card.

Using Command Messages and Strings

Applications communicate with the MCI via a set of predefined messages and string constants. In much the same way that window messages are used with the user interface services, the MCI provides a set of command messages and strings that can be used to manipulate MCI devices. Many of these messages offer corresponding command strings that can be used for a more intuitive (and readable) approach. Here, we will limit our discussion to the MCI command messages; the message constants are defined in the mmsystem.h header file available with C++Builder.

Similar to the SendMessage() API function that is used to send messages to Windows, the mciSendCommand() function is used to send command messages to MCI devices.

MCIERROR mciSendCommand(
    MCIDEVICEID IDDevice,
    UINT uMsg,
    DWORD fdwCommand,
    DWORD dwParam
);

When using the mciSendCommand() function, oftentimes a device identifier is specified as the IDDevice parameter. This serves the same purpose as the hWnd parameter of the SendMessage() function. Namely, the function needs to know to which MCI device to send the message. A device identifier is returned when a device is opened via the MCI_OPEN message.

Decoding Error Constants

The mciSendCommand() function returns a 32-bit value indicating the success or failure of the operation. If successful, this value is set to MMSYSERR_NOERROR, defined in the mmsystem.h header file as identically zero. If an error does occur, the return value is set to one of the predefined error constants. Because these values have little meaning to the user (or the developer), the MCI presents the mciGetErrorString() function, which can be used to translate these error codes into meaningful messages (much as the FormatMessage() API function does for system error codes). Listing 15.20 demonstrates the use of this function.

Listing 15.20 Decoding MCI-Related Errors Via the mciGetErrorString() Function

bool mciCheck(DWORD AErrorNum, bool AReport = true)
{
    if (AErrorNum == MMSYSERR_NOERROR) return true;
    if (AReport)
    {
        char buffer[MAXERRORLENGTH];
        mciGetErrorString(AErrorNum, buffer, MAXERRORLENGTH);
        MessageBox(NULL, buffer, "MCI Error", MB_OK);
    }
    return false;
}

Working with MCI Devices

The first step to working with an MCI device is to open or initialize the device; as previously stated, you accomplish this task via the MCI_OPEN message. Because you are interested in retrieving a device identifier, you send this message with a NULL IDDevice parameter. If successful, the identifier of the opened device is returned in the wDeviceID data member of the corresponding MCI_OPEN_PARMS structure; this is the MCI_OPEN_PARMS structure specified as the dwParam argument.

typedef struct tagMCI_OPEN_PARMS {
    DWORD       dwCallback;
    MCIDEVICEID wDeviceID;
    LPCSTR      lpstrDeviceType;
    LPCSTR      lpstrElementName;
    LPCSTR      lpstrAlias;
} MCI_OPEN_PARMS, *PMCI_OPEN_PARMS, *LPMCI_OPEN_PARMS;

Typically, the lpstrDeviceType data member is set to NULL, and the lpstrElementName data member is assigned a filename. This is the most robust approach because it enables the MCI to perform automatic type selection. That is, the appropriate device will be selected according to the type of file specified. In cases where the lpstrDeviceType data member is explicitly specified, it is oftentimes assigned a string value corresponding to the type of device requested. For example, to open the CD audio device, you can specify it as cdaudio. Other string identifiers include avivideo, dat, digitalvideo, mmmovie, other, overlay, scanner, sequencer, vcr, videodisc, and waveaudio. However, it should be stressed that unless support for a specific device is intended, it is best to let the MCI perform automatic type selection. This is especially important when a new technology is presented that has no predefined type identifier (for example, the MP3 format). Listing 15.21 demonstrates the use of the MCI_OPEN message via a simple wrapper function.
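Conceptually, automatic type selection is just a lookup from file extension to device type (in reality, the MCI consults registry associations rather than a hard-coded table). The following portable sketch illustrates the idea; the function name and the mapping shown are our own illustration, not the registry's actual contents:

```cpp
#include <cassert>
#include <map>
#include <string>

// Illustrative stand-in for MCI automatic type selection: map a file
// extension to one of the MCI device-type strings discussed above.
static std::string mciGuessDeviceType(const std::string& fileName)
{
    static const std::map<std::string, std::string> table = {
        { "wav", "waveaudio" },
        { "avi", "avivideo"  },
        { "mid", "sequencer" },
        { "cda", "cdaudio"   },
    };
    std::string::size_type dot = fileName.rfind('.');
    if (dot == std::string::npos) return "";
    std::map<std::string, std::string>::const_iterator it =
        table.find(fileName.substr(dot + 1));
    return (it != table.end()) ? it->second : "";
}
```

An unrecognized extension yields an empty string here, mirroring the case where you would fall back to letting the MCI decide.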

Listing 15.21 Using the MCI_OPEN Command Message
bool mciOpen(MCIDEVICEID& ADevID, const char* AFileName,
    const char* ADevType = NULL)
{
    MCI_OPEN_PARMS mop;
    memset(&mop, 0, sizeof(MCI_OPEN_PARMS));
    mop.lpstrElementName = const_cast<char*>(AFileName);
    mop.lpstrDeviceType = const_cast<char*>(ADevType);
    DWORD flags = 0;
    if (AFileName) flags = flags | MCI_OPEN_ELEMENT;
    if (ADevType) flags = flags | MCI_OPEN_TYPE;
    if (mciCheck(mciSendCommand(NULL, MCI_OPEN, flags,
                                reinterpret_cast<DWORD>(&mop))))
    {
        ADevID = mop.wDeviceID;
        return true;
    }
    return false;
}

After the MCI device is open and an identifier is retrieved, your next task is to set the time format of the device. This aspect of the MCI is rather specific to the device type because certain types of devices can support only certain time formats. For example, specifying a track number is valid for CD audio devices, but it is clearly invalid for waveform audio devices.

You can set the time format for a device via the MCI_SET command message. Whenever applicable, it is best to use the MCI_FORMAT_MILLISECONDS format, which is supported by all devices. An example wrapper function that uses the MCI_SET message is provided in Listing 15.22.
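For CD audio devices, another common choice is MCI_FORMAT_MSF, which packs minutes, seconds, and frames into a single DWORD; mmsystem.h supplies the MCI_MAKE_MSF and MCI_MSF_* macros for working with such values. As a portable sketch of the documented byte layout (minutes in the low byte, then seconds, then frames; lowercase names are ours):

```cpp
#include <cassert>

typedef unsigned long DWORD;  // stand-in for the Windows typedef
typedef unsigned char BYTE;

// Mirrors the mmsystem.h MSF packing scheme: minutes occupy the low
// byte, seconds the next byte, and frames the third byte.
inline DWORD mci_make_msf(BYTE m, BYTE s, BYTE f)
{
    return (DWORD)m | ((DWORD)s << 8) | ((DWORD)f << 16);
}
inline BYTE mci_msf_minute(DWORD msf) { return (BYTE)(msf & 0xFF); }
inline BYTE mci_msf_second(DWORD msf) { return (BYTE)((msf >> 8) & 0xFF); }
inline BYTE mci_msf_frame(DWORD msf)  { return (BYTE)((msf >> 16) & 0xFF); }
```

On Windows you would simply use the real macros; the point here is that an MSF position is nothing more than three packed bytes.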

Listing 15.22 Using the MCI_SET Command Message
bool mciSetTimeFormat(MCIDEVICEID ADeviceID, DWORD ATimeFormat)
{
    MCI_SET_PARMS msp;
    memset(&msp, 0, sizeof(MCI_SET_PARMS));
    msp.dwTimeFormat = ATimeFormat;
    return mciCheck(mciSendCommand(ADeviceID, MCI_SET,
                    MCI_SET_TIME_FORMAT, reinterpret_cast<DWORD>(&msp)));
}

After the time format is set correctly, you're free to work with the device in much the same way as you would its physical counterpart. For example, to play the device, you use the MCI_PLAY message. To rewind or fast-forward the device, you use the MCI_SEEK message. To pause the device, you use the MCI_PAUSE message. Similarly, the MCI_STOP message is used to stop the device, and the MCI_CLOSE message closes the device.

NOTE

For a complete list of messages, refer to http://msdn.microsoft.com.


The wrapper functions presented in Listing 15.23 demonstrate the use of these messages.

Listing 15.23 Use of the MCI_PLAY, MCI_SEEK, MCI_PAUSE, MCI_STOP, and MCI_CLOSE Command Messages

bool mciPlay(MCIDEVICEID ADeviceID, DWORD AStart, DWORD AStop)
{
    MCI_PLAY_PARMS mpp;
    memset(&mpp, 0, sizeof(MCI_PLAY_PARMS));
    mpp.dwFrom = AStart;
    mpp.dwTo = AStop;
    DWORD flags = 0;
    if (static_cast<int>(AStart) >= 0 && static_cast<int>(AStop) >= 0)
        flags = MCI_FROM | MCI_TO;
    return mciCheck(mciSendCommand(ADeviceID, MCI_PLAY, flags,
                    reinterpret_cast<DWORD>(&mpp)));
}

bool mciSeek(MCIDEVICEID ADeviceID, DWORD APos)
{
    MCI_SEEK_PARMS msp;
    memset(&msp, 0, sizeof(MCI_SEEK_PARMS));
    msp.dwTo = APos;
    return mciCheck(mciSendCommand(ADeviceID, MCI_SEEK, MCI_TO,
                    reinterpret_cast<DWORD>(&msp)));
}

bool mciPause(MCIDEVICEID ADeviceID)
{
    return mciCheck(mciSendCommand(ADeviceID, MCI_PAUSE, 0, 0));
}

bool mciStop(MCIDEVICEID ADeviceID)
{
    return mciCheck(mciSendCommand(ADeviceID, MCI_STOP, 0, 0));
}

void mciClose(MCIDEVICEID ADeviceID)
{
    mciCheck(mciSendCommand(ADeviceID, MCI_CLOSE, 0, 0));
}

Illustrated in Figure 15.5 is a sample project included on the companion CD-ROM that demonstrates the use of these MCI messages. See the Proj_mp3Demo.bpr project in the MP3Demo folder, and specifically the MCIManip.cpp source file.

Figure 15.5. MP3 player project.


Retrieving the Status of a Device

Often it is necessary to provide feedback to the user about the status of a device. For example, if you want to create a CD player, you'll most likely want to inform the user of the current track, the track length, and the current position within the track. More generally, you need to indicate the operating mode of the device (playing, paused, stopped, and so on) to inform the user as to what functionality is available. This information is retrieved via the MCI_STATUS message, as demonstrated in Listing 15.24.

Listing 15.24 Use of the MCI_STATUS Command Message
bool mciStatus(MCIDEVICEID ADeviceID, DWORD AQueryGroup, DWORD AQueryItem,
    DWORD AQueryTrack, DWORD& AResult)
{
    MCI_STATUS_PARMS msp;
    memset(&msp, 0, sizeof(MCI_STATUS_PARMS));
    msp.dwItem = AQueryItem;
    msp.dwTrack = AQueryTrack;
    if (mciCheck(mciSendCommand(ADeviceID, MCI_STATUS, AQueryGroup,
                 reinterpret_cast<DWORD>(&msp))))
    {
        AResult = msp.dwReturn;
        return true;
    }
    return false;
}

The project Proj_CDDemo.bpr in the CDDemo folder on the CD-ROM that accompanies this book is a sample CD player that demonstrates all the techniques shown in these sections. This is illustrated in Figure 15.6.

Figure 15.6. CD demo program.


A wide variety of constants can be specified as the AQueryGroup parameter, each with a corresponding set of constants that can be assigned to the AQueryItem and AQueryTrack arguments. For example, to retrieve the total number of tracks present on the media of a CD audio device, you specify MCI_STATUS_ITEM as the AQueryGroup parameter and MCI_STATUS_NUMBER_OF_TRACKS as the AQueryItem parameter. To determine the current track number, you set the AQueryGroup parameter to MCI_STATUS_ITEM and the AQueryItem parameter to MCI_STATUS_CURRENT_TRACK. Also, note that when retrieving length, position, track, or frame information, the format of the returned data depends on the device's current time format. For a complete listing of status flags, refer to http://msdn.microsoft.com.
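Because the returned data depends on the time format, a CD player typically converts between formats when presenting positions. Red Book CD audio plays 75 frames per second, so converting a millisecond position into track-style minute/second/frame values is simple arithmetic; a portable sketch (the struct and function names are ours):

```cpp
#include <cassert>

struct Msf { int minute, second, frame; };

// Convert a position reported in MCI_FORMAT_MILLISECONDS into the
// minute/second/frame values of CD audio (75 frames per second).
static Msf msfFromMilliseconds(long ms)
{
    Msf r;
    r.minute = static_cast<int>(ms / 60000);
    r.second = static_cast<int>((ms / 1000) % 60);
    r.frame  = static_cast<int>((ms % 1000) * 75 / 1000);
    return r;
}
```

A position of 125,000 ms, for example, comes out as 2 minutes, 5 seconds, frame 0.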

Polling a Device and MCI Notifications

Although you now have a means of retrieving information about a device, you still need to know when to perform this interrogation. Unfortunately, the MCI presents a limited notification scheme in which only two notification messages, MM_MCINOTIFY and MM_MCISIGNAL, are defined. The latter is useful only for digital video devices. Although the former message sounds promising, it is posted only after a command operation has completed. For example, if you use the mciPlay() wrapper function (of Listing 15.23) to begin the playback of a waveform audio file, the MM_MCINOTIFY message will be posted only when the file has finished playing or playback has otherwise been manipulated. Specifically, this message is posted to the window whose handle is specified via the dwCallback data member of the structure specified as the dwParam argument of the mciSendCommand() function. As such, you need to modify the wrapper functions to accept an hWnd parameter. The sample project Proj_mp3Demo.bpr, in the MP3Demo folder on the CD-ROM that accompanies this book, demonstrates handling the MM_MCINOTIFY message.

In most cases, receipt of the MM_MCINOTIFY message is a sufficient indication of when to determine the status of the operating mode. For example, you can provide a handler for the MM_MCINOTIFY message in which you update the enabled state of your play, pause, and stop buttons. Yet, when retrieving information about a frequently updated attribute such as current position, the MM_MCINOTIFY message is not suitable. Instead, you need to poll the device at a regular interval. This task is typically performed in response to timer messages. In some cases, it is sufficient to use system timer messages; in others it is recommended to use the multimedia timer services. See Chapter 14 or visit http://msdn.microsoft.com for more information on multimedia timer services.
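A typical timer handler does little more than poll the position (in milliseconds, assuming the MCI_FORMAT_MILLISECONDS format set earlier) and format it for a position label. A minimal sketch of the formatting step (the function name is ours):

```cpp
#include <cassert>
#include <cstdio>
#include <string>

// Format a polled millisecond position as "m:ss" for display in a
// position label updated from a timer event.
static std::string positionText(long positionMs)
{
    char buffer[16];
    std::sprintf(buffer, "%ld:%02ld", positionMs / 60000,
                 (positionMs / 1000) % 60);
    return buffer;
}
```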

Concluding Remarks About the MCI

Just what types of files does the MCI support? This depends on the audio/video codecs that are installed on the target platform. Many of these codecs are installed when a newer version of Microsoft Media Player is installed. A good rule of thumb is that if Media Player can support a specific file format, so can the MCI. In fact, Media Player itself relies heavily on the MCI.

Included on the companion CD-ROM are two MCI-related demonstration projects: Proj_MP3Demo.bpr in the MP3Demo folder and Proj_VideoDemo.bpr in the VideoDemo folder. The former is a simple MP3 audio player, illustrated previously in Figure 15.5, which can actually support waveform audio (RIFF-based) files as well. The latter, illustrated in Figure 15.7, demonstrates the use of the MCI for displaying video files (AVI, MPEG). Again, the actual supported file formats of both of these demonstration projects are limited by the currently installed codecs. Refer to the comments included at the beginning of the source code for more information.

Figure 15.7. Video player project.


Although the MCI is perhaps the easiest of all multimedia interfaces to work with, it is quite limited in its functionality. For example, when playing a media file through the MCI, you are never given access to the file's associated data stream. This is especially detrimental if your application is to perform any type of signal processing or format conversion. In this case, you need to go beyond the MCI and work with other multimedia interfaces. For extended audio functionality, this is typically accomplished via the Waveform Audio Interface.

The Waveform Audio Interface

The Windows multimedia service provides the Waveform Audio Interface (waveform API) to allow applications to control the input and output of waveform audio. This interface gives an application direct access to the sound buffer, so in cases where other audio formats must be supported, a simple conversion is all that is necessary. For example, many of the commercial applications that provide support for the MP3 format do so through the waveform API. Moreover, in situations where signal processing is required, direct access to the sound buffer is crucial. For example, if you're interested in creating an audio player with graphic equalization capabilities, you'll need to process the sound buffer before sending it to the output device.
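As a taste of why direct buffer access matters, here is a minimal sketch of in-place processing: applying a gain factor to 16-bit samples with saturation, the kind of per-sample step an equalizer band would perform. The function and its scaling are our own illustration, not part of the waveform API:

```cpp
#include <cassert>

// Multiply each 16-bit PCM sample by a gain factor, clamping the
// result to the valid sample range to avoid wraparound distortion.
static void applyGain(short* samples, int count, double gain)
{
    for (int i = 0; i < count; ++i)
    {
        double v = samples[i] * gain;
        if (v >  32767.0) v =  32767.0;
        if (v < -32768.0) v = -32768.0;
        samples[i] = static_cast<short>(v);
    }
}
```

With the MCI alone, there is simply no place to call a function like this; with the waveform API, you own the buffer between reading it and handing it to the driver.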

Recall that the lpstrElementName data member of the MCI_OPEN_PARMS structure is typically assigned the name of a media file that is to be played. In this case, the MCI automatically handles the task of opening and loading the file. However, the waveform API does not present such a structure, and thus you're forced to use other measures for file I/O. For example, one potential solution is to open the file manually using the TFileStream VCL class. In this case, you'd need to be sufficiently versed with the waveform audio file format specification (RIFF). An alternative approach is to use the multimedia file I/O services, which is indeed completely valid for many situations. However, because these services are so generalized, working with waveform audio files proves nearly as difficult as the manual solution. Fortunately, Windows provides the AVIFile services, a set of functions and macros specifically designed for use with waveform audio and AVI files. As such, let's now digress from the waveform API and examine the AVIFile services.

Opening and Closing Waveform Audio Files

A waveform audio file is a type of RIFF (Resource Interchange File Format) file that contains time-based audio content. In fact, that's really all you need to know. As mentioned, you do not need to concern yourself with the specifics of the file format itself; instead, you can use the AVIFile functions and macros. These functions and macros, presented in the VFW.H header file and the AVIFIL32.DLL dynamic link library, provide a convenient means of working with waveform audio files and streams.
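For the curious, the structure that the AVIFile services hide is straightforward: a RIFF file begins with a "RIFF" chunk that carries a total size and a form type ("WAVE" for waveform audio), followed by subchunks such as "fmt " and "data". The following portable sketch builds the 12-byte RIFF header in memory and checks it back; the helper names are ours:

```cpp
#include <cassert>
#include <cstring>

// Write a 32-bit value in little-endian order, as RIFF requires.
static void putLe32(unsigned char* p, unsigned long v)
{
    p[0] = (unsigned char)(v & 0xFF);
    p[1] = (unsigned char)((v >> 8) & 0xFF);
    p[2] = (unsigned char)((v >> 16) & 0xFF);
    p[3] = (unsigned char)((v >> 24) & 0xFF);
}

// Build the 12-byte header that precedes the 'fmt ' chunk: the
// "RIFF" tag, the chunk size (file size minus the 8 header bytes),
// and the "WAVE" form type.
static void buildRiffHeader(unsigned char* p, unsigned long fileSize)
{
    std::memcpy(p, "RIFF", 4);
    putLe32(p + 4, fileSize - 8);
    std::memcpy(p + 8, "WAVE", 4);
}

static bool isWaveFile(const unsigned char* p)
{
    return std::memcmp(p, "RIFF", 4) == 0 &&
           std::memcmp(p + 8, "WAVE", 4) == 0;
}
```

This is exactly the bookkeeping that AVIFileOpen() and friends spare you from.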

Before you can use the AVIFile services, you need to initialize the avifil32.dll library via the AVIFileInit() function. Similarly, when you're finished with the library, you release it via the AVIFileExit() function.

The AVIFile functions rely on OLE for handling file and stream-based operations, so you need to provide a means of error checking. For those functions that return an HRESULT (that is, those declared with the STDAPI macro), you can use the SUCCEEDED macro as in the following wrapper function:

bool wavCheck(HRESULT AErrorCode)
{
    return SUCCEEDED(AErrorCode);
}

Let's begin our examination of the AVIFile services by performing the simplest of tasks: opening a waveform audio file. This is accomplished via the AVIFileOpen() function:

bool wavOpenFile(PAVIFILE& ApFile, const char* AFileName,
    unsigned int AMode)
{
    return wavCheck(AVIFileOpen(&ApFile, AFileName, AMode, NULL));
}

The AMode parameter specifies the access mode and can be assigned the same access-related constants that are used with the OpenFile() API function (OF_READ, for example). The ApFile argument receives a pointer to an AVIFILE structure that simply holds the address of the file handler interface. Because this file handler interface is released only when its reference count drops to zero, it is important that you decrement its reference count when the interface is no longer needed. This is accomplished via the AVIFileClose() function:

void wavCloseFile(PAVIFILE& ApFile)
{
    AVIFileClose(ApFile);
    ApFile = NULL;
}

Working with Audio Streams

Although opening and closing a waveform audio file is rather trivial, you'll need to work with the stream handler interface to obtain any useful information. This task proves slightly more complicated. Recall that the PAVIFILE type holds a pointer to the file handler interface. Similarly, a pointer to the stream handler interface is stored in a variable of type PAVISTREAM. You can retrieve a pointer to this latter interface via the AVIFileGetStream() function. You release the interface via the AVIStreamRelease() function. The wrapper functions presented in Listing 15.25 demonstrate the use of these AVIFile functions.

Listing 15.25 Using the AVIFileGetStream and AVIStreamRelease Functions
bool wavOpenStream(PAVISTREAM& ApStream, PAVIFILE ApFile)
{
    return wavCheck(AVIFileGetStream(ApFile, &ApStream, streamtypeAUDIO, 0));
}

void wavCloseStream(PAVISTREAM& ApStream)
{
    AVIStreamRelease(ApStream);
    ApStream = NULL;
}

Working with the stream handler interface is not unlike working with the TMemoryStream class or one of the basic_streambuf descendant classes. However, you have at your disposal several functions specifically designed for use with waveform audio and AVI files. For example, the AVIStreamInfo() function can be used to retrieve information about the content of the stream. This function fills an AVISTREAMINFO structure with information specific to its media content:

bool wavGetStreamInfo(PAVISTREAM ApStream, AVISTREAMINFO& AStreamInfo)
{
    return wavCheck(AVIStreamInfo(ApStream, &AStreamInfo,
                    sizeof(AVISTREAMINFO)));
}

When working with an audio stream, you'll need to know the format of the data itself. For waveform audio files, this information is conveyed via the data members of a WAVEFORMATEX structure. This structure is simply used to describe how audio samples are stored in the corresponding waveform audio data. As such, a particularly useful function when working with waveform audio streams is the AVIStreamReadFormat() function. It is the role of this function to fill the data members of the WAVEFORMATEX structure based on information in the stream. The AVIStreamFormatSize() macro complements this function by reporting the size of the contained structure. Listing 15.26 demonstrates the use of the AVIStreamReadFormat() function and the AVIStreamFormatSize() macro.

Listing 15.26 Using the AVIStreamReadFormat() Function and the AVIStreamFormatSize() Macro
long wavCalcFormatStructSize(PAVISTREAM ApStream)
{
    long required_bytes = 0;
    AVIStreamFormatSize(ApStream, 0, &required_bytes);
    return required_bytes;
}

bool wavReadFormatStruct(PAVISTREAM ApStream, WAVEFORMATEX& ApFormatStruct)
{
    memset(&ApFormatStruct, 0, sizeof(WAVEFORMATEX));
    long size = wavCalcFormatStructSize(ApStream);
    return wavCheck(
        AVIStreamReadFormat(ApStream, 0, &ApFormatStruct, &size)
        );
}
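The WAVEFORMATEX fields retrieved above are interrelated: for PCM data, nBlockAlign is nChannels * wBitsPerSample / 8, and nAvgBytesPerSec is nSamplesPerSec * nBlockAlign. The following portable sketch derives the dependent fields from the independent ones, using a local struct that mirrors the relevant WAVEFORMATEX members (the struct and function names are ours):

```cpp
#include <cassert>

// Local mirror of the WAVEFORMATEX fields relevant to PCM audio.
struct PcmFormat
{
    unsigned short channels;
    unsigned long  samplesPerSec;
    unsigned short bitsPerSample;
    unsigned short blockAlign;      // bytes per sample frame
    unsigned long  avgBytesPerSec;  // data rate of the stream
};

// Fill in the dependent fields as the PCM rules prescribe.
static PcmFormat makePcmFormat(unsigned short channels,
                               unsigned long rate,
                               unsigned short bits)
{
    PcmFormat f;
    f.channels = channels;
    f.samplesPerSec = rate;
    f.bitsPerSample = bits;
    f.blockAlign = (unsigned short)(channels * bits / 8);
    f.avgBytesPerSec = rate * f.blockAlign;
    return f;
}
```

CD-quality stereo (2 channels, 44,100 Hz, 16 bits), for example, yields a block alignment of 4 bytes and a data rate of 176,400 bytes per second.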

Like the TMemoryStream::Read() or the basic_ifstream::read() member function, the AVIFile services provide a means by which an application can read media content from a stream. This is the audio data buffer that you're interested in and, as you will see later, access to this buffer is essential to producing audio output via the waveform API. The AVIStreamRead() function is used to read media content from a stream into an application-defined buffer. Similarly, the AVIStreamWrite() function is used to write data from a buffer into a stream. Listing 15.27 demonstrates the use of these functions.

Listing 15.27 Reading and Writing Audio Data to and from a Stream
long wavCalcBufferSize(PAVISTREAM ApStream)
{
    long required_bytes = 0;
    AVISTREAMINFO StreamInfo;
    if (wavGetStreamInfo(ApStream, StreamInfo))
    {
        required_bytes = StreamInfo.dwLength * StreamInfo.dwScale;
    }
    return required_bytes;
}

long wavReadStream(PAVISTREAM ApStream, long AStart, long ANumBytes,
    char* ABuffer)
{
    long bytes_read = 0;
    AVISTREAMINFO StreamInfo;
    if (wavGetStreamInfo(ApStream, StreamInfo))
    {
        long num_samples = static_cast<long>(
            static_cast<float>(ANumBytes) /
            static_cast<float>(StreamInfo.dwScale));
        AVIStreamRead(ApStream, AStart, num_samples, ABuffer, ANumBytes,
                      &bytes_read, NULL);
    }
    return bytes_read;
}

long wavWriteStream(PAVISTREAM ApStream, long AStart, long ANumBytes,
    char* ABuffer)
{
    long bytes_written = 0;
    AVISTREAMINFO StreamInfo;
    if (wavGetStreamInfo(ApStream, StreamInfo))
    {
        long num_samples = static_cast<long>(
            static_cast<float>(ANumBytes) /
            static_cast<float>(StreamInfo.dwScale));
        AVIStreamWrite(ApStream, AStart, num_samples, ABuffer, ANumBytes,
                       AVIIF_KEYFRAME, NULL, &bytes_written);
    }
    return bytes_written;
}

The wavCalcBufferSize() wrapper function is comparable to the TMemoryStream::Size property. It uses the wavGetStreamInfo() wrapper function to calculate the size of the audio buffer. Also note that, because the stream handler interface is intrinsically linked to the file handler interface, any data that you write to the stream will automatically be written to the file once the stream is closed. As such, if you're interested in manipulating only the content of the stream, you'll need to create a secondary stream that is not associated with a particular file. For more information on this task, see http://msdn.microsoft.com.

Now that you have a framework by which you can manipulate waveform audio files, let's return to our examination of the waveform API and investigate the means by which you can output waveform audio sound. As with the MCI, the functions and structures of the waveform API are declared and defined, respectively, in the MMSYSTEM.H header file.

Opening and Closing Waveform Audio Devices

The waveform API provides the waveOutOpen() function for use in opening a waveform audio output device. This function requires an initialized WAVEFORMATEX structure and, if successful, assigns an HWAVEOUT variable the handle to the open device. Similarly, the waveOutClose() function is used to close the waveform audio device. The wrapper functions of Listing 15.28 demonstrate the use of these functions.

Listing 15.28 Opening and Closing a Waveform Audio Device
bool wavPlayOpen(HWAVEOUT& AHWavOut, long ACallback, DWORD ANotifyInstance,
    DWORD AOpenFlags, WAVEFORMATEX& AFormatStruct)
{
    return wavCheck(
        waveOutOpen(&AHWavOut, WAVE_MAPPER, &AFormatStruct, ACallback,
                    ANotifyInstance, AOpenFlags)
        );
}

void wavPlayClose(HWAVEOUT AHWavOut)
{
    waveOutReset(AHWavOut);
    waveOutClose(AHWavOut);
}

The ACallback , ANotifyInstance , and AOpenFlags parameters can be used to specify a means of notification; we will return to this issue shortly. For the AFormatStruct parameter, you can use the wavReadFormatStruct() wrapper function of Listing 15.26.

After an output device is open, you use the waveOutWrite() function to initiate playback. Specifically, you pass this function a pointer to a buffer of audio data that the function will send to the opened output device driver. However, the audio output device must know the size of the audio block that it's going to receive. For this reason, the waveOutWrite() function cannot accept a plain data buffer; instead, the function requires the address of a WAVEHDR structure.

Among other things, a WAVEHDR structure stores a pointer to the audio data buffer in its lpData data member and the length of this buffer in its dwBufferLength data member. To ensure compatibility with the output device, you must allow the driver to prepare your WAVEHDR structure before you can pass it to the waveOutWrite() function. This task is accomplished via the waveOutPrepareHeader() function. Similarly, after the device driver has finished playing the audio block, you must unprepare the header using the waveOutUnprepareHeader() function before you can free the associated memory. Listing 15.29 demonstrates the use of these functions.

Listing 15.29 Initiating and Ending Waveform Audio Playback
bool wavPlayBegin(HWAVEOUT AHWavOut, WAVEHDR& AWavHdr)
{
    if (wavCheck(waveOutPrepareHeader(AHWavOut, &AWavHdr, sizeof(WAVEHDR))))
    {
        return wavCheck(
            waveOutWrite(AHWavOut, &AWavHdr, sizeof(WAVEHDR))
            );
    }
    return false;
}

void wavPlayEnd(HWAVEOUT AHWavOut, WAVEHDR& AWavHdr)
{
    waveOutReset(AHWavOut);
    waveOutUnprepareHeader(AHWavOut, &AWavHdr, sizeof(WAVEHDR));
}

Let's solidify these concepts with a simple example that demonstrates how to play a waveform audio file. An example project called Proj_DSPDemo.bpr can be found on the companion CD-ROM in the DSPDemo folder; it is illustrated in Figure 15.8.

Figure 15.8. DSP demonstration.


Recall that, because the waveform API presents no native means of loading the audio data from disk, we will first need to use our AVIFile wrapper functions. After we read the audio data block from the stream into our buffer, we can then use our waveform API wrapper functions to control playback. The code for this example is presented in Listing 15.30.

Listing 15.30 Playing a Quantized Waveform Audio File
const long MAX_BLOCK_SIZE = 6000 * 1024;

if (!OpenDialog1->Execute()) return;
const char* filename = OpenDialog1->FileName.c_str();
PAVIFILE pFile = NULL;
if (wavOpenFile(pFile, filename, OF_READ))
{
    PAVISTREAM pStream = NULL;
    if (wavOpenStream(pStream, pFile))
    {
        long block_size = wavCalcBufferSize(pStream);
        if (block_size < MAX_BLOCK_SIZE)
        {
            char* buffer = new char[block_size];
            if (wavReadStream(pStream, 0, block_size, buffer) == block_size)
            {
                QuantizeBuffer(buffer, block_size);
                WAVEFORMATEX FormatStruct;
                if (wavReadFormatStruct(pStream, FormatStruct))
                {
                    HWAVEOUT HWavOut;
                    if (wavPlayOpen(HWavOut, NULL, NULL, NULL, FormatStruct))
                    {
                        WAVEHDR WavHdr;
                        memset(&WavHdr, 0, sizeof(WAVEHDR));
                        WavHdr.lpData = buffer;
                        WavHdr.dwBufferLength = block_size;
                        if (wavPlayBegin(HWavOut, WavHdr))
                        {
                            ShowMessage("Playing: " + AnsiString(filename));
                            wavPlayEnd(HWavOut, WavHdr);
                        }
                        wavPlayClose(HWavOut);
                    }
                }
            }
            delete [] buffer;
        }
        wavCloseStream(pStream);
    }
    wavCloseFile(pFile);
}

void QuantizeBuffer(char* ABuffer, long ABufferLength)
{
    short int min_val = 0, max_val = 0;
    for (int index = 0; index < ABufferLength; ++index)
    {
        if (ABuffer[index] < min_val) min_val = ABuffer[index];
        if (ABuffer[index] > max_val) max_val = ABuffer[index];
    }
    for (int index = 0; index < ABufferLength; ++index)
    {
        if (ABuffer[index] < 0) ABuffer[index] = min_val;
        if (ABuffer[index] > 0) ABuffer[index] = max_val;
    }
}

Notice in Listing 15.30 that we placed a limit on the size of the audio data block. Indeed, we would not want to load too large a file and deplete system resources. When working with large audio data blocks, small segments of the media content are read from the stream and then sent to the driver in an iterative fashion. That is, after the driver has finished with the current buffer, we continually supply it with new information. However, for such a method to be successful, we need a means by which we can determine when the driver has finished with the buffer. That is, the waveOutWrite() function returns immediately, so we have no way of knowing when the driver has completed playback. This is where the ACallback parameter of our wavPlayOpen() wrapper function comes in. Specifically, we can assign this argument the handle of a window or an event, an identifier of a thread, or even the address of a specific callback function. In this way, we can establish a crucial means of notification. See http://msdn.microsoft.com for information on the MM_WOM_DONE message.
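The bookkeeping behind such iterative streaming is plain arithmetic: the total stream is divided into fixed-size blocks, and each block's byte range is fed to the driver in turn as its predecessor completes. A portable sketch of just the chunking step (the function name is ours):

```cpp
#include <cassert>
#include <utility>
#include <vector>

// Split a stream of totalBytes into (offset, length) blocks of at
// most blockSize bytes each. In a streaming player, each block would
// be read via wavReadStream() and queued with waveOutWrite(),
// refilling buffers as the driver signals completion.
static std::vector< std::pair<long, long> >
splitIntoBlocks(long totalBytes, long blockSize)
{
    std::vector< std::pair<long, long> > blocks;
    for (long offset = 0; offset < totalBytes; offset += blockSize)
    {
        long length = totalBytes - offset;
        if (length > blockSize) length = blockSize;
        blocks.push_back(std::make_pair(offset, length));
    }
    return blocks;
}
```

In practice at least two buffers are kept in flight so the driver is never starved while the next block is being read from disk.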

Concluding Remarks on the Waveform Audio Interface

We have covered a wide variety of multimedia functions, structures, messages, and macros, but there are still more to explore. For example, the Windows multimedia system also provides specific interfaces for controlling MIDI and AVI playback. Moreover, each of these interfaces, including the Waveform Audio Interface and the MCI, provides services for sound input as well as output. There is even an interface for working with audio mixer devices, which is particularly useful when the volume of specific audio channels needs to be controlled. See http://msdn.microsoft.com for more information.


   
C++Builder 5 Developers Guide
ISBN: 0672319721
Year: 2002
Pages: 253
