Comparing intrinsic and extrinsic evaluation of MT output in a dialogue system

Anne H Schneider, Ielka van der Sluis, Saturnino Luz

Research output: Contribution to conference › Paper

Abstract

We present an exploratory study to assess machine translation output for application in a dialogue system using an intrinsic and an extrinsic evaluation method. For the intrinsic evaluation we developed an annotation scheme to determine the quality of the translated utterances in isolation. For the extrinsic evaluation we employed the Wizard of Oz technique to assess the quality of the translations in the context of a dialogue application. Results differ and we discuss the possible reasons for this outcome.
Original language: English
Pages: 329-336
Number of pages: 8
Publication status: Published - Dec 2010
Event: 7th International Workshop on Spoken Language Translation (IWSLT 2010) - Paris, France
Duration: 2 Dec 2010 → …

Workshop

Workshop: 7th International Workshop on Spoken Language Translation (IWSLT 2010)
Country: France
City: Paris
Period: 2/12/10 → …

ASJC Scopus subject areas

  • Computer Science (all)

Cite this

Schneider, A. H., van der Sluis, I., & Luz, S. (2010). Comparing intrinsic and extrinsic evaluation of MT output in a dialogue system. Paper presented at the 7th International Workshop on Spoken Language Translation (IWSLT 2010), Paris, France, pp. 329-336.
