Spoken, Multilingual and Multimodal Dialogue Systems: Development and Assessment addresses the growing demand for information about the development of advanced dialogue systems that combine speech with other modalities in a multilingual framework. It gives a systematic overview of dialogue systems and of recent advances in the practical application of spoken dialogue systems.
Spoken Dialogue Systems are computer-based systems that provide information and carry out simple tasks using speech as the interaction mode. Examples include travel information and reservation, weather forecast information, directory information and product ordering. Multimodal Dialogue Systems aim to overcome the limitations of spoken dialogue systems, which use speech as the only communication means, while Multilingual Systems allow interaction with users who speak different languages.
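Systems of this kind are commonly prototyped in VoiceXML, the W3C markup language for speech-driven dialogues (introduced in the book's Appendix A). As a minimal sketch, a travel-information prompt might look like the fragment below; the form and field names, and the referenced grammar file, are illustrative rather than taken from the book:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<vxml version="2.1" xmlns="http://www.w3.org/2001/vxml">
  <!-- Hypothetical travel-information dialogue: one field, one grammar -->
  <form id="travel_info">
    <field name="destination">
      <prompt>Which city would you like to travel to?</prompt>
      <!-- SRGS grammar constraining what the recogniser listens for -->
      <grammar type="application/srgs+xml" src="cities.grxml"/>
      <noinput>Sorry, I did not hear you. <reprompt/></noinput>
    </field>
    <filled>
      <prompt>Looking up flights to <value expr="destination"/>.</prompt>
    </filled>
  </form>
</vxml>
```

A VoiceXML interpreter plays each prompt via speech synthesis and matches the caller's spoken reply against the grammar, which is the basic interaction loop the chapters below build on.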
- Presents a clear snapshot of the structure of a standard dialogue system, by addressing its key components in the context of multilingual and multimodal interaction and the assessment of spoken, multilingual and multimodal systems
- Describes the development and evaluation of these systems as well as the fundamentals of the technologies employed
- Highlights recent advances in the practical application of spoken dialogue systems
This comprehensive overview is a must for graduate students and academics in the fields of speech recognition, speech synthesis, speech processing, language and human–computer interaction technology. It will also prove a valuable resource for system developers working in these areas.
1. Introduction to Dialogue Systems.
1.1 Human–Computer Interaction and Speech Processing.
1.2 Spoken Dialogue Systems.
1.3 Multimodal Dialogue Systems.
1.4 Multilingual Dialogue Systems.
1.5 Dialogue Systems Referenced in This Book.
1.6 Area Organisation and Research Directions.
1.7 Overview of the Book.
1.8 Further Reading.
2. Technologies Employed to Set Up Dialogue Systems.
2.1 Input Interface.
2.2 Multimodal Processing.
2.3 Output Interface.
2.5 Further Reading.
3. Multimodal Dialogue Systems.
3.1 Benefits of Multimodal Interaction.
3.2 Development of Multimodal Dialogue Systems.
3.5 Further Reading.
4. Multilingual Dialogue Systems.
4.1 Implications of Multilinguality in the Architecture of Dialogue Systems.
4.2 Multilingual Dialogue Systems Based on Interlingua.
4.3 Multilingual Dialogue Systems Based on Web Applications.
4.5 Further Reading.
5. Dialogue Annotation, Modelling and Management.
5.1 Dialogue Annotation.
5.2 Dialogue Modelling.
5.3 Dialogue Management.
5.4 Implications of Multimodality in the Dialogue Management.
5.5 Implications of Multilinguality in the Dialogue Management.
5.6 Implications of Task Independence in the Dialogue Management.
5.8 Further Reading.
6. Development Tools.
6.1 Tools for Spoken and Multilingual Dialogue Systems.
6.2 Standards and Tools for Multimodal Dialogue Systems.
6.4 Further Reading.
7.1 Overview of Evaluation Techniques.
7.2 Evaluation of Spoken and Multilingual Dialogue Systems.
7.3 Evaluation of Multimodal Dialogue Systems.
7.5 Further Reading.
Appendix A: Basic Tutorial on VoiceXML.
Appendix B: Multimodal Databases.
Appendix C: Coding Schemes for Multimodal Resources.
Appendix D: URLs of Interest.
Appendix E: List of Abbreviations.
Masahiro Araki is an Associate Professor in the Department of Electronics and Information Science at the Kyoto Institute of Technology. His current interests are spoken dialogue processing and artificial intelligence. He is a member of ACL and ISCA.
João P. Neto is an Assistant Professor at Instituto Superior Técnico (IST), Technical University of Lisbon, where he teaches signal theory, discrete signal processing, control systems and neural networks. His research interests focus on spoken, multimodal and multilingual dialogue systems, speech recognition and understanding, dialogue management and speech synthesis.