We present an exploratory study to assess machine translation output for application in a dialogue system using an intrinsic and an extrinsic evaluation method. For the intrinsic evaluation we developed an annotation scheme to determine the quality of the translated utterances in isolation. For the extrinsic evaluation we employed the Wizard of Oz technique to assess the quality of the translations in the context of a dialogue application. The results of the two evaluations differ, and we discuss possible reasons for this outcome.
Number of pages: 8
Publication status: Published - Dec 2010
Event: 7th International Workshop on Spoken Language Translation (IWSLT 2010) - Paris, France
Duration: 2 Dec 2010 → …