Abstract
We investigate the data collected for the Accuracy Evaluation Shared Task as a retrospective reproduction study. The shared task was based upon errors found by human annotation of computer-generated summaries of basketball games. Annotation was performed in three separate stages, with texts taken from the same three systems and checked for errors by the same three annotators. We show that the mean count of errors was consistent at the highest level for each experiment, with increased variance when looking at per-system and/or per-error-type breakdowns.
| Original language | English |
| --- | --- |
| Title of host publication | Proceedings of the 15th International Conference on Natural Language Generation: Generation Challenges |
| Place of publication | Waterville, Maine, USA and virtual meeting |
| Publisher | Association for Computational Linguistics |
| Pages | 71-79 |
| Number of pages | 9 |
| Publication status | Published - 1 Jul 2022 |