The World Wide Web Consortium (W3C) announced new work on extensions to components of the Speech Interface Framework, which will both extend Speech Synthesis Markup Language functionality to Asian and other languages and add speaker verification features to the next version of VoiceXML, version 3.0.
The Speech Synthesis Markup Language (SSML), a W3C Recommendation since 2004, is designed to provide a rich, XML-based markup language for assisting the generation of synthetic speech in Web and other applications. The essential role of the markup language is to provide authors of synthesizable content a standard way to control aspects of speech such as pronunciation, volume, pitch, and rate across different synthesis-capable platforms.
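To illustrate, a minimal SSML fragment in the style of the 2004 Recommendation might adjust speaking rate and pitch and pin down a pronunciation as follows (the prosody values and IPA transcription shown here are purely illustrative, not drawn from the announcement):

    <?xml version="1.0"?>
    <speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">
      <!-- Slow the rate slightly and raise the pitch for this sentence -->
      <prosody rate="slow" pitch="+10%">
        The quarterly report is now available.
      </prosody>
      <!-- Specify an exact pronunciation with an IPA transcription -->
      <phoneme alphabet="ipa" ph="təˈmeɪtoʊ">tomato</phoneme>
    </speak>

SSML 1.0 offers no dedicated markup for lexical tone, which is part of the gap the new extension work is intended to address.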
While these attributes are critical, additional attributes may be even more important for specific languages. For example, Mandarin Chinese, the most widely spoken language in the world today, is a tonal language: the same spoken syllable can carry different meanings depending on the tone used. Given the profusion of cellphones in China – estimated by some at over one billion – the case for extending SSML to Mandarin is clear in terms of sheer market forces.
Including extensions for Japanese, Korean, and other languages will make fuller participation of the world's speakers on the Web possible.
The W3C is an international industry consortium jointly run by the MIT Computer Science and Artificial Intelligence Laboratory (MIT CSAIL) in the USA, the European Research Consortium for Informatics and Mathematics (ERCIM) headquartered in France, and Keio University in Japan.