Abstract
Across NLP, a growing body of work is looking at the issue of reproducibility. However, replicability of human evaluation experiments and reproducibility of their results are currently under-addressed, and this is of particular concern for NLG, where human evaluations are the norm. This paper outlines our ideas for a shared task on reproducibility of human evaluations in NLG which aims (i) to shed light on the extent to which past NLG evaluations have been replicable and reproducible, and (ii) to draw conclusions regarding how evaluations can be designed and reported to increase replicability and reproducibility. If the task is run over several years, we hope to be able to document an overall increase in levels of replicability and reproducibility over time.
Original language | English
---|---
Pages | 232-236
Number of pages | 5
Publication status | Published - Dec 2020
Event | 13th International Conference on Natural Language Generation (INLG 2020), Dublin City University, Dublin, Ireland (held online), 15 Dec 2020 → 18 Dec 2020. https://www.inlg2020.org/
Conference
Conference | Proceedings of the 13th International Conference on Natural Language Generation
---|---
Abbreviated title | INLG 2020
Country/Territory | Ireland
City | Dublin
Period | 15/12/20 → 18/12/20
Internet address | https://www.inlg2020.org/