ReproGen: Proposal for a Shared Task on Reproducibility of Human Evaluations in NLG

Anya Belz, Shubham Agarwal, Ehud Reiter, Anastasia Shimorina

Research output: Contribution to conference › Unpublished paper › peer-review

Abstract

Across NLP, a growing body of work is looking at the issue of reproducibility. However, the replicability of human evaluation experiments and the reproducibility of their results are currently under-addressed, which is of particular concern for NLG, where human evaluations are the norm. This paper outlines our ideas for a shared task on reproducibility of human evaluations in NLG which aims (i) to shed light on the extent to which past NLG evaluations have been replicable and reproducible, and (ii) to draw conclusions regarding how evaluations can be designed and reported to increase replicability and reproducibility. If the task is run over several years, we hope to be able to document an overall increase in levels of replicability and reproducibility over time.
Original language: English
Pages: 232-236
Number of pages: 5
Publication status: Published - Dec 2020
Event: Proceedings of the 13th International Conference on Natural Language Generation - held online, Dublin City University, Dublin, Ireland
Duration: 15 Dec 2020 – 18 Dec 2020
Conference number: 13
https://www.inlg2020.org/

Conference

Conference: Proceedings of the 13th International Conference on Natural Language Generation
Abbreviated title: INLG 2020
Country/Territory: Ireland
City: Dublin
Period: 15/12/20 – 18/12/20
Internet address: https://www.inlg2020.org/
