Abstract
This paper presents a study of the issues involved in using NLG to humanise explanations from a popular interpretable machine learning framework called LIME. Our study shows that self-reported ratings of the NLG explanation were higher than those for a non-NLG explanation. However, when tested for comprehension, the results were not as clear-cut, showing the need for further studies to uncover the factors responsible for high-quality NLG explanations.
Original language | English
---|---
Title of host publication | Proceedings of The 11th International Natural Language Generation Conference
Editors | Emiel Krahmer, Albert Gatt, Martijn Goudbeek
Publisher | Association for Computational Linguistics (ACL)
Pages | 177–182
Number of pages | 6
ISBN (Print) | 9781948087865
Publication status | Published - 30 Nov 2018
Event | 11th International Conference on Natural Language Generation (INLG 2018), Tilburg University, Tilburg, Netherlands. Duration: 5 Nov 2018 → 8 Nov 2018
Conference

Conference | 11th International Conference on Natural Language Generation (INLG 2018)
---|---
Country/Territory | Netherlands
City | Tilburg
Period | 5/11/18 → 8/11/18