Evaluating factual accuracy in complex data-to-text

Craig Thomson, Ehud Reiter, Barkavi Sundararajan

Research output: Contribution to journal › Article › peer-review

Abstract

It is essential that data-to-text Natural Language Generation (NLG) systems produce texts which are factually accurate. We examine accuracy issues in the task of generating summaries of basketball games, including what accuracy means in this context, how accuracy errors can be detected by human annotators, and the types of accuracy mistakes made by both neural NLG systems and human authors. We also look at the effectiveness of automatic metrics in measuring factual accuracy.
Original language: English
Article number: 101482
Journal: Computer Speech & Language
Early online date: 5 Jan 2023
Publication status: E-pub ahead of print, 5 Jan 2023

Keywords

  • Natural Language Generation
  • Complex data-to-text
  • Evaluation
  • Annotation
  • Factual accuracy
  • Neural data-to-text

