Advanced Deinterlace Settings


We have seen that the VMR supports hardware-accelerated deinterlacing, in which the GPU performs the deinterlacing of interlaced content. Hardware-accelerated deinterlacing is enabled automatically whenever the graphics driver supports it. However, some cards support more than one deinterlacing technique, so you can query the VMR for the driver's list of deinterlacing techniques and specify which one to use. This is something of an advanced feature, useful when you want the very best rendering quality. The Amplify sample enables this feature when you play interlaced content (assuming the graphics driver also supports it).

Deinterlacing techniques include:

  • Vertical stretching of the field lines, using interpolation to create missing pixel values. (This technique is a variant of bob deinterlacing.)

  • Median filtering, where pixel values are recreated by taking the median value from a set of nearby pixels.

  • Edge filtering, in which an edge detection filter is used to generate the missing information.

  • Motion vectors, in which picture objects are tracked from one frame to the next as they move.

To get a list of the available techniques, we need to connect the decoder to the VMR, because the exact list may depend on the video format. To switch to a new technique, however, the decoder may need to allocate more buffers. For example, some techniques require the graphics card to examine the previous N frames. Because buffer allocation happens when the pins are connected, the pins must be disconnected and then reconnected before the new technique can take effect.

Here is an outline of the process:

  1. Get a list of the VMR's input pins and the corresponding output pins on the decoders. You'll need these so that you can reconnect the pins later.

  2. Get the video format from the pin. The format is returned in the form of an AM_MEDIA_TYPE structure.

  3. Call IVMRDeinterlaceControl9::GetNumberOfDeinterlaceModes. This method uses a VMR9VideoDesc structure to describe the video format, so you need to convert the AM_MEDIA_TYPE structure. The GetNumberOfDeinterlaceModes method fills an array of GUIDs that identify the available techniques. GUIDs are used so that a driver can define its own proprietary techniques.

  4. For each deinterlacing GUID, get a description of the deinterlacing technique by calling GetDeinterlaceModeCaps. The method returns a VMR9DeinterlaceCaps structure that describes the technique.

  5. To set the deinterlacing technique, call SetDeinterlaceMode with the pin number and the GUID. Stop the filter graph, reconnect the pins, and start the graph again.

The next section describes each step in detail.

Get the Video Format

To get the current media type for a pin connection, call IPin::ConnectionMediaType on either pin. The Amplify sample renders only one video stream, so there is only a single pin to check. In general, you must do this for every connected input pin, because each video stream can have a different format.

// Find the list of input pins.
PinList pins;
GetPinList(pVMR, PINDIR_INPUT, pins);

// Look for the input pin that is connected to a decoder.
for (PinIterator iter = pins.begin(); iter != pins.end(); iter++)
{
    CComPtr<IPin> pConnected;
    hr = (*iter)->ConnectedTo(&pConnected);
    if (SUCCEEDED(hr))
    {
        m_pPinIn = *iter;
        m_pPinOut = pConnected;
        break;
    }
}

// Get the media type.
AM_MEDIA_TYPE mt;
hr = m_pPinIn->ConnectionMediaType(&mt);
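
The GetPinList helper used above is defined elsewhere in the sample. A minimal sketch of what it might look like, assuming PinList is a std::vector of CComPtr<IPin> (the sample's actual definitions may differ):

// Hypothetical definitions; the Amplify sample may declare these differently.
typedef std::vector< CComPtr<IPin> > PinList;
typedef PinList::iterator PinIterator;

// Collect every pin on the filter that matches the requested direction.
HRESULT GetPinList(IBaseFilter *pFilter, PIN_DIRECTION dir, PinList& pins)
{
    CComPtr<IEnumPins> pEnum;
    HRESULT hr = pFilter->EnumPins(&pEnum);
    if (FAILED(hr))
    {
        return hr;
    }
    CComPtr<IPin> pPin;
    while (pEnum->Next(1, &pPin, NULL) == S_OK)
    {
        PIN_DIRECTION thisDir;
        if (SUCCEEDED(pPin->QueryDirection(&thisDir)) && (thisDir == dir))
        {
            pins.push_back(pPin);  // The CComPtr copy AddRefs the pin.
        }
        pPin.Release();  // Release before the next call to Next().
    }
    return S_OK;
}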

Convert the Media Type

The VMR's deinterlace functions use a VMR9VideoDesc structure to describe the video format, instead of the usual DirectShow media type structure. In this section, we present some helper code to perform the conversion.

First, we need a function that checks whether a video format is interlaced. If not, there's no need to continue.

bool IsVideoInterlaced(const AM_MEDIA_TYPE& mt)
{
    // Is this a VideoInfo2 format?
    if ((mt.formattype == FORMAT_VideoInfo2) && (mt.pbFormat != NULL) &&
        (mt.cbFormat >= sizeof(VIDEOINFOHEADER2)))
    {
        VIDEOINFOHEADER2 *pVih = (VIDEOINFOHEADER2*)mt.pbFormat;
        if (pVih->dwInterlaceFlags & AMINTERLACE_IsInterlaced)
        {
            return true;
        }
    }
    return false;
}

Two format types may be used for uncompressed video: VIDEOINFOHEADER and VIDEOINFOHEADER2. All interlaced formats use the latter. The AMINTERLACE_IsInterlaced flag in the dwInterlaceFlags field indicates that the video is interlaced. If that flag is absent, the video is progressive.

The other flags in the dwInterlaceFlags field describe the type of interlacing. The ConvertInterlaceFlags function converts these flags into an equivalent value from the VMR9_SampleFormat enumeration:

#define IsInterlaced(x)   ((x) & AMINTERLACE_IsInterlaced)
#define IsSingleField(x)  ((x) & AMINTERLACE_1FieldPerSample)
#define IsField1First(x)  ((x) & AMINTERLACE_Field1First)

VMR9_SampleFormat ConvertInterlaceFlags(DWORD dwInterlaceFlags)
{
    if (IsInterlaced(dwInterlaceFlags)) {
        if (IsSingleField(dwInterlaceFlags)) {
            if (IsField1First(dwInterlaceFlags)) {
                return VMR9_SampleFieldSingleEven;
            }
            else {
                return VMR9_SampleFieldSingleOdd;
            }
        }
        else {
            if (IsField1First(dwInterlaceFlags)) {
                return VMR9_SampleFieldInterleavedEvenFirst;
            }
            else {
                return VMR9_SampleFieldInterleavedOddFirst;
            }
        }
    }
    else {
        return VMR9_SampleProgressiveFrame;  // Not interlaced.
    }
}

The ConvertMediaTypeToVideoDesc function converts a media type to a VMR9VideoDesc structure.

HRESULT ConvertMediaTypeToVideoDesc(
    const AM_MEDIA_TYPE& mt, VMR9VideoDesc& desc)
{
    // Verify that the media type is VideoInfo2.
    if ((mt.formattype != FORMAT_VideoInfo2) ||
        (mt.pbFormat == NULL) ||
        (mt.cbFormat < sizeof(VIDEOINFOHEADER2)))
    {
        return E_FAIL;
    }

    VIDEOINFOHEADER2 *pVih = (VIDEOINFOHEADER2*)mt.pbFormat;
    BITMAPINFOHEADER *pBMI = &pVih->bmiHeader;

    desc.dwSize = sizeof(VMR9VideoDesc);
    desc.dwSampleWidth = abs(pBMI->biWidth);
    desc.dwSampleHeight = abs(pBMI->biHeight);
    desc.SampleFormat = ConvertInterlaceFlags(pVih->dwInterlaceFlags);
    desc.dwFourCC = pBMI->biCompression;

    // Check for well-known frame rates.
    switch (pVih->AvgTimePerFrame)
    {
        case 166833:  // NTSC, 59.94 fps.
            desc.InputSampleFreq.dwNumerator = 60000;
            desc.InputSampleFreq.dwDenominator = 1001;
            break;
        case 333667:  // NTSC, 29.97 fps.
            desc.InputSampleFreq.dwNumerator = 30000;
            desc.InputSampleFreq.dwDenominator = 1001;
            break;
        case 417188:  // NTSC, 23.97 fps.
            desc.InputSampleFreq.dwNumerator = 24000;
            desc.InputSampleFreq.dwDenominator = 1001;
            break;
        case 200000:  // PAL, 50 fps.
            desc.InputSampleFreq.dwNumerator = 50;
            desc.InputSampleFreq.dwDenominator = 1;
            break;
        case 400000:  // PAL, 25 fps.
            desc.InputSampleFreq.dwNumerator = 25;
            desc.InputSampleFreq.dwDenominator = 1;
            break;
        default:  // Unknown; derive the rate from the frame duration.
            desc.InputSampleFreq.dwNumerator = 10000000;
            desc.InputSampleFreq.dwDenominator =
                (DWORD)pVih->AvgTimePerFrame;
            break;
    }

    // Calculate the output frequency. Interleaved fields produce two
    // output frames for every input sample.
    desc.OutputFrameFreq.dwDenominator = desc.InputSampleFreq.dwDenominator;
    if (desc.SampleFormat == VMR9_SampleFieldInterleavedEvenFirst ||
        desc.SampleFormat == VMR9_SampleFieldInterleavedOddFirst)
    {
        desc.OutputFrameFreq.dwNumerator =
            desc.InputSampleFreq.dwNumerator * 2;
    }
    else
    {
        desc.OutputFrameFreq.dwNumerator =
            desc.InputSampleFreq.dwNumerator;
    }
    return S_OK;
}

Note that the media type gives the duration of each frame, in 100-nanosecond units, while the VMR9VideoDesc structure gives the frame rate, in frames per second, expressed as a fraction. Interlaced video generally comes in one of a few standard formats, and we check for these so that we can assign a more accurate ratio. For example, NTSC television is 59.94 fields per second (60000/1001), which rounds to a frame duration of 166833 in the media type (in units of 100 nanoseconds, that is, 10^-7 seconds).
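
As a quick check of the arithmetic, the hypothetical helper below (not part of the Amplify sample) rounds a rational frame rate back to a frame duration in 100-nanosecond units:

// Hypothetical helper: convert a frame rate expressed as a fraction into
// the equivalent AvgTimePerFrame value (100-nanosecond units), rounding
// to the nearest unit. For 60000/1001 this yields 166833.
REFERENCE_TIME FrameRateToAvgTimePerFrame(DWORD dwNumerator, DWORD dwDenominator)
{
    return (REFERENCE_TIME)
        ((10000000LL * dwDenominator + (dwNumerator / 2)) / dwNumerator);
}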

With these functions defined, we can convert the video format as follows.

if (IsVideoInterlaced(mt))
{
    hr = ConvertMediaTypeToVideoDesc(mt, m_VideoDesc);
}

Query for Deinterlacing Techniques
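
The queries in this section go through the IVMRDeinterlaceControl9 interface, which the sample stores in m_pDeinterlace. A minimal sketch of obtaining it, assuming pVMR is the IBaseFilter pointer for the VMR-9 filter instance:

// Assumption: pVMR is the VMR-9 filter (IBaseFilter*). The deinterlace
// control interface is exposed by the filter itself.
CComPtr<IVMRDeinterlaceControl9> m_pDeinterlace;
HRESULT hr = pVMR->QueryInterface(IID_IVMRDeinterlaceControl9,
    (void**)&m_pDeinterlace);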

Once you have determined whether a video is interlaced, and have filled in the VMR9VideoDesc structure, you can get the list of GUIDs that define the deinterlace techniques. Call GetNumberOfDeinterlaceModes to find out how many techniques are supported by the graphics card. If the number is more than zero, allocate an array of GUIDs and call GetNumberOfDeinterlaceModes again to fill in the array:

DWORD cModes = 0;
hr = m_pDeinterlace->GetNumberOfDeinterlaceModes(&m_VideoDesc,
    &cModes, NULL);
if (SUCCEEDED(hr) && (cModes > 0))
{
    m_pGuids = new GUID[cModes];
    m_cGuids = cModes;
    hr = m_pDeinterlace->GetNumberOfDeinterlaceModes(&m_VideoDesc,
        &cModes, m_pGuids);
}

The first time that we call GetNumberOfDeinterlaceModes, the third parameter is set to NULL. This call returns the number of techniques in the cModes parameter. If the number is greater than zero, we allocate the GUID array and call GetNumberOfDeinterlaceModes again, this time passing the address of the array as the third parameter. The second call fills in the array.
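
Step 4 of the outline, describing each technique, is not shown in the sample code above. A minimal sketch, assuming m_pGuids, m_cGuids, and m_VideoDesc have been filled in as described:

// For each reported technique, ask the driver for its capabilities.
for (DWORD i = 0; i < m_cGuids; i++)
{
    VMR9DeinterlaceCaps caps;
    ZeroMemory(&caps, sizeof(caps));
    caps.dwSize = sizeof(VMR9DeinterlaceCaps);
    hr = m_pDeinterlace->GetDeinterlaceModeCaps(&m_pGuids[i],
        &m_VideoDesc, &caps);
    if (SUCCEEDED(hr))
    {
        // caps.DeinterlaceTechnology identifies the technique (for example,
        // VMR9DeinterlaceTech_BOBVerticalStretch or
        // VMR9DeinterlaceTech_MedianFiltering), and the reference-sample
        // counts indicate how many surrounding samples the driver needs.
    }
}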

Set the Deinterlacing Technique

To change the deinterlace mode, call SetDeinterlaceMode with the pin number and a pointer to the GUID. Then stop the graph and reconnect the pins:

HRESULT CVMRGraph::SetDeinterlaceTechnique(DWORD index)
{
    if (index >= m_cGuids)
    {
        return E_INVALIDARG;
    }
    m_pGraph->Stop();
    m_pDeinterlace->SetDeinterlaceMode(0, m_pGuids + index);
    HRESULT hr = ReconnectPins(m_pPinOut, m_pPinIn);
    if (SUCCEEDED(hr))
    {
        m_pGraph->Run();
    }
    return hr;
}

HRESULT CVMRGraph::ReconnectPins(IPin *pOut, IPin *pIn)
{
    AM_MEDIA_TYPE mt;
    HRESULT hr = pOut->ConnectionMediaType(&mt);
    if (FAILED(hr))
    {
        return hr;
    }
    m_pGraph->Disconnect(pOut);
    m_pGraph->Disconnect(pIn);
    hr = m_pGraph->ConnectDirect(pOut, pIn, &mt);
    MyFreeMediaType(mt);
    return hr;
}

The IGraphBuilder::Disconnect method disconnects a pin. You must call the method on both pins; otherwise, one of the pins thinks it's still connected, causing unpredictable behavior in the graph. The IGraphBuilder::ConnectDirect method connects two pins without inserting any additional filters between them. To learn more about deinterlacing support in the VMR, refer to the topic Setting Deinterlace Preferences in the DirectX SDK documentation.
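
After the graph is running again, you can confirm which mode is actually in effect by calling IVMRDeinterlaceControl9::GetDeinterlaceMode. A minimal sketch, assuming stream 0 as in the code above:

// Ask the VMR which deinterlace mode is currently set for stream 0.
GUID guidActual = GUID_NULL;
HRESULT hr = m_pDeinterlace->GetDeinterlaceMode(0, &guidActual);
if (hr == S_OK)
{
    // guidActual identifies the technique now in use.
}
else if (hr == S_FALSE)
{
    // No explicit mode has been set; the VMR is using its default behavior.
}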



