Abstract
The effect of mistranslations on the verbal behaviour of users of speech-to-speech translation is investigated through a question answering experiment in which users were presented with machine-translated questions through synthesised speech. Results show that people are likely to align their verbal behaviour to the output of a system that combines machine translation, speech recognition and speech synthesis in an interactive dialogue context, even when the system produces erroneous output. The alignment phenomenon has previously been considered by dialogue system designers from the perspective of the benefits it might bring to the interaction (e.g. by making the user more likely to employ terms contained in the system's vocabulary). In contrast, our results reveal that in speech-to-speech translation systems alignment can in fact be detrimental to the interaction (e.g. by priming the user to align with non-existing lexical items produced by mistranslation). The implications of these findings are discussed with respect to the design of such systems.
Original language | English
---|---
Pages | 254-260
Number of pages | 7
Publication status | Published - Dec 2011
Event | International Workshop on Spoken Language Translation 2011 (IWSLT'11), San Francisco, United States, 8 Dec 2011 → 9 Dec 2011
Workshop
Workshop | International Workshop on Spoken Language Translation 2011 (IWSLT'11)
---|---
Country/Territory | United States
City | San Francisco
Period | 8/12/11 → 9/12/11