4 Concluding remarks

The model of the multimodal act of referring that we propose is based on an original representation of objects, which makes it possible to integrate information coming from different modalities. Using this model, the agent can understand a multimodal reference produced by another agent and identify the referent; it can also produce a multimodal reference itself.
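As an informal illustration only (the chapter does not specify the internal representation, so every name and structure below is an assumption), the following Python sketch conveys the general idea of combining constraints from a verbal description with those from a pointing gesture over a shared object representation in order to identify a referent:

```python
from dataclasses import dataclass, field

# Hypothetical, simplified object representation: each object in the context
# carries properties that may originate from different modalities.
@dataclass
class ObjectRepresentation:
    identifier: str
    properties: dict = field(default_factory=dict)  # e.g. {"colour": "red", "type": "lamp"}
    position: tuple = (0.0, 0.0)                    # used to match pointing gestures

def resolve_referent(context, spoken_constraints, pointed_at=None, radius=0.5):
    """Return the objects compatible with both the verbal description and,
    if present, a pointing gesture (illustrative logic only)."""
    candidates = []
    for obj in context:
        # Constraint from the verbal modality: all mentioned properties must match.
        if any(obj.properties.get(k) != v for k, v in spoken_constraints.items()):
            continue
        # Constraint from the gestural modality: the object must lie near the pointed location.
        if pointed_at is not None:
            dx = obj.position[0] - pointed_at[0]
            dy = obj.position[1] - pointed_at[1]
            if (dx * dx + dy * dy) ** 0.5 > radius:
                continue
        candidates.append(obj)
    return candidates

# Example: "that red lamp" accompanied by a pointing gesture near (1.0, 2.0).
context = [
    ObjectRepresentation("lamp-1", {"type": "lamp", "colour": "red"}, (1.1, 2.0)),
    ObjectRepresentation("lamp-2", {"type": "lamp", "colour": "red"}, (4.0, 0.0)),
]
print([o.identifier for o in
       resolve_referent(context, {"type": "lamp", "colour": "red"}, pointed_at=(1.0, 2.0))])
# -> ['lamp-1']: only the object satisfying both modalities is retained.
```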

One of the characteristics of this model is that it can easily be extended beyond reference to objects. It also makes it possible to refer in a multimodal way to relations existing between objects, as well as to other properties of multimodal utterances, such as facial gestures referring to the illocutionary force.

Our model is currently being implemented within a dialogue system that is already operational in natural language [15].

Once implemented, this model should allow richer and more user-friendly dialogues. The variety of media and modes of communication available will diversify interactions: users will be able to define their preferred modalities and to switch from one to another, and user and system will be able to use several modalities at the same time and adapt the presentation to the content. The multimedia and multimodal dimensions of information processing systems will finally be exploited to their full measure, that is, in an intelligent and adapted way.

[15] This system integrates a rational dialogue agent called ARTIMIS [SAD 97], founded on a theory of rational interaction [SAD 91].



