An Investigation into the Validity of Some Metrics for Automatically Evaluating Natural Language Generation Systems

Research output: Contribution to journal › Article

54 Citations (Scopus)

Abstract

There is growing interest in using automatically computed corpus-based evaluation metrics to evaluate Natural Language Generation (NLG) systems, because these are often considerably cheaper than the human-based evaluations which have traditionally been used in NLG. We review previous work on NLG evaluation and on validation of automatic metrics in NLP, and then present the results of two studies of how well some metrics which are popular in other areas of NLP (notably BLEU and ROUGE) correlate with human judgments in the domain of computer-generated weather forecasts. Our results suggest that, at least in this domain, metrics may provide a useful measure of language quality, although the evidence for this is not as strong as we would ideally like to see; however, they do not provide a useful measure of content quality. We also discuss a number of caveats which must be kept in mind when interpreting this and other validation studies.

Original language: English
Pages (from-to): 529-558
Number of pages: 30
Journal: Computational Linguistics
Volume: 35
Issue number: 4
Early online date: 7 Dec 2009
DOI: 10.1162/coli.2009.35.4.35405
Publication status: Published - Dec 2009

Keywords

  • weather forecasts

Cite this

An Investigation into the Validity of Some Metrics for Automatically Evaluating Natural Language Generation Systems. / Reiter, Ehud Baruch; Belz, Anja.

In: Computational Linguistics, Vol. 35, No. 4, 12.2009, p. 529-558.


@article{a5596110d37a49eebd9e34e4230bcad2,
title = "An Investigation into the Validity of Some Metrics for Automatically Evaluating Natural Language Generation Systems",
abstract = "There is growing interest in using automatically computed corpus-based evaluation metrics to evaluate Natural Language Generation (NLG) systems, because these are often considerably cheaper than the human-based evaluations which have traditionally been used in NLG. We review previous work on NLG evaluation and on validation of automatic metrics in NLP, and then present the results of two studies of how well some metrics which are popular in other areas of NLP (notably BLEU and ROUGE) correlate with human judgments in the domain of computer-generated weather forecasts. Our results suggest that, at least in this domain, metrics may provide a useful measure of language quality, although the evidence for this is not as strong as we would ideally like to see; however, they do not provide a useful measure of content quality. We also discuss a number of caveats which must be kept in mind when interpreting this and other validation studies.",
keywords = "weather forecasts",
author = "Reiter, {Ehud Baruch} and Anja Belz",
year = "2009",
month = dec,
doi = "10.1162/coli.2009.35.4.35405",
language = "English",
volume = "35",
pages = "529--558",
journal = "Computational Linguistics",
issn = "0891-2017",
publisher = "MIT Press Journals",
number = "4",
}

TY  - JOUR
T1  - An Investigation into the Validity of Some Metrics for Automatically Evaluating Natural Language Generation Systems
AU  - Reiter, Ehud Baruch
AU  - Belz, Anja
PY  - 2009/12
Y1  - 2009/12
N2  - There is growing interest in using automatically computed corpus-based evaluation metrics to evaluate Natural Language Generation (NLG) systems, because these are often considerably cheaper than the human-based evaluations which have traditionally been used in NLG. We review previous work on NLG evaluation and on validation of automatic metrics in NLP, and then present the results of two studies of how well some metrics which are popular in other areas of NLP (notably BLEU and ROUGE) correlate with human judgments in the domain of computer-generated weather forecasts. Our results suggest that, at least in this domain, metrics may provide a useful measure of language quality, although the evidence for this is not as strong as we would ideally like to see; however, they do not provide a useful measure of content quality. We also discuss a number of caveats which must be kept in mind when interpreting this and other validation studies.
AB  - There is growing interest in using automatically computed corpus-based evaluation metrics to evaluate Natural Language Generation (NLG) systems, because these are often considerably cheaper than the human-based evaluations which have traditionally been used in NLG. We review previous work on NLG evaluation and on validation of automatic metrics in NLP, and then present the results of two studies of how well some metrics which are popular in other areas of NLP (notably BLEU and ROUGE) correlate with human judgments in the domain of computer-generated weather forecasts. Our results suggest that, at least in this domain, metrics may provide a useful measure of language quality, although the evidence for this is not as strong as we would ideally like to see; however, they do not provide a useful measure of content quality. We also discuss a number of caveats which must be kept in mind when interpreting this and other validation studies.
KW  - weather forecasts
U2  - 10.1162/coli.2009.35.4.35405
DO  - 10.1162/coli.2009.35.4.35405
M3  - Article
VL  - 35
SP  - 529
EP  - 558
JO  - Computational Linguistics
JF  - Computational Linguistics
SN  - 0891-2017
IS  - 4
ER  - 