Summary


  • To stay competitive, companies must design distinctive, functional applications. The Microsoft .NET Speech SDK offers an easy way to add speech capabilities to ASP.NET applications. Multimodal applications give the user a choice of input mechanisms, and as mobile devices grow smaller, speech input can be a much-needed benefit.

  • When designing multimodal applications, it is not enough to speech-enable every input. Because the user can choose not to use speech, the application must offer intuitive, additional functionality that gives the user a reason to speak.

  • The Query grammar for the sample application is more complex than the one presented in Chapter 3. It combines a hierarchy of rules with list elements and rule references; a sketch of this structure follows the next bullet.

  • Optional Preamble and Postamble rules account for additional words in a spoken phrase. For instance, it is not necessary to capture "List all classes" from the phrase "List all classes for Doctor Davis." Instead, the first three words are part of an optional subphrase: the phrase still matches the grammar when the subphrase is included, but the subphrase itself is not captured as a semantic item (see the sketch below).
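
  A minimal SRGS-style sketch of this structure appears below. The rule names, phrases, and professor list are illustrative assumptions only; the chapter's actual grammar is built with the SASDK Grammar Editor, and the tag elements that would emit SML semantic items are omitted for brevity.

    <!-- Hedged sketch: a hierarchy of rules, alternative lists, and rule
         references. Names and phrases are illustrative assumptions. -->
    <grammar version="1.0" xml:lang="en-US" root="Query"
             xmlns="http://www.w3.org/2001/06/grammar">

      <rule id="Query" scope="public">
        <!-- Optional preamble: accepted if spoken, but not captured. -->
        <item repeat="0-1"><ruleref uri="#Preamble"/></item>
        <ruleref uri="#Professor"/>
        <!-- Optional postamble, for trailing words such as "please". -->
        <item repeat="0-1"><ruleref uri="#Postamble"/></item>
      </rule>

      <rule id="Preamble">
        <one-of>
          <item>list all classes for</item>
          <item>show me the classes for</item>
        </one-of>
      </rule>

      <rule id="Professor">
        <one-of>
          <item>Doctor Davis</item>
          <item>Doctor Smith</item>
        </one-of>
      </rule>

      <rule id="Postamble">
        <item>please</item>
      </rule>
    </grammar>

  Because the Preamble and Postamble rule references are wrapped in items with repeat="0-1", a phrase matches the grammar with or without them.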

  • Query results are displayed to the user visually through a standard data grid. In addition, the details for each course are spoken in a continuous fashion, so the student is not forced to drill down into each course to get all the information. This functionality is accomplished by setting the ShortInitialTimeout property to a value greater than zero, as sketched below.
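
  A minimal code-behind sketch of that setting follows. Only the property name ShortInitialTimeout comes from the chapter; the control name, the assumption that the property is set directly on a QA speech control, the namespace, and the value and its unit are illustrative assumptions.

    using System;
    using Microsoft.Speech.Web.UI;   // assumed namespace for the SASDK speech controls

    public class CourseQuery : System.Web.UI.Page
    {
        // Assumed: a QA speech control declared on the .aspx page.
        protected QA qaCourseDetail;

        protected void Page_Load(object sender, EventArgs e)
        {
            // Per the chapter, a value greater than zero is what allows the
            // course details to be spoken back-to-back; 500 is illustrative
            // and assumed to be in milliseconds.
            qaCourseDetail.ShortInitialTimeout = 500;
        }
    }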

  • The manifest.xml file preloads resources, such as grammar files, that Speech Services uses; an illustrative fragment follows.
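
  The chapter does not reproduce the file itself; the fragment below is a sketch only, and its element and attribute names and URLs are assumptions rather than the documented manifest.xml schema. It simply illustrates the idea of listing grammar resources for Speech Services to preload.

    <!-- Illustrative sketch only; element and attribute names and URLs are
         assumptions, not the documented schema. -->
    <manifest>
      <grammar src="http://webserver/CourseApp/Grammars/Query.grxml" />
      <grammar src="http://webserver/CourseApp/Grammars/Login.grxml" />
    </manifest>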

  • When running a multimodal application on a Pocket PC device, you must first install the speech add-in for Internet Explorer on the device. Applications designed for the Pocket PC must also set the MIME type so that the speech add-in can instantiate SALT objects.

  • The Microsoft .NET Speech SDK offers several options for application tuning. One option allows you to enable Speech Debugging Console logging for each successful or unsuccessful recognition; the resulting SML is captured to an XML-based file, which can be very helpful when testing an application and during the initial rollout phase. Developers can also analyze trace session files created on the Speech Server using one of three log utilities provided with the SASDK.
