Abstract

This paper presents a study to understand the issues related to using NLG to humanise explanations from a popular interpretable machine learning framework called LIME. Our study shows that the self-reported rating of the NLG explanation was higher than that for a non-NLG explanation. However, when tested for comprehension, the results were not as clear-cut, showing the need for more studies to uncover the factors responsible for high-quality NLG explanations.
Original language | English
---|---
Title of host publication | Proceedings of The 11th International Natural Language Generation Conference
Editors | Emiel Krahmer, Albert Gatt, Martijn Goudbeek
Publisher | Association for Computational Linguistics (ACL)
Pages | 177-182
Number of pages | 6
ISBN (Print) | 9781948087865
Publication status | Published - 30 Nov 2018
Event | 11th International Conference on Natural Language Generation (INLG 2018), Tilburg University, Tilburg, Netherlands, 5 Nov 2018 → 8 Nov 2018
Conference

Conference | 11th International Conference on Natural Language Generation (INLG 2018)
---|---
Country | Netherlands
City | Tilburg
Period | 5 Nov 2018 → 8 Nov 2018
Cite this
Towards making NLG a voice for interpretable Machine Learning. / Forrest, James; Sripada, Somayajulu; Pang, Wei; Coghill, George.
Proceedings of The 11th International Natural Language Generation Conference. ed. / Emiel Krahmer; Albert Gatt; Martijn Goudbeek. Association for Computational Linguistics (ACL), 2018. p. 177-182 W18-6522. Research output: Chapter in Book/Report/Conference proceeding › Conference contribution
TY - GEN
T1 - Towards making NLG a voice for interpretable Machine Learning
AU - Forrest, James
AU - Sripada, Somayajulu
AU - Pang, Wei
AU - Coghill, George
N1 - I would like to acknowledge the support given to me by the Engineering and Physical Sciences Research Council (EPSRC) DTP grant number EP/N509814/1.
PY - 2018/11/30
Y1 - 2018/11/30
N2 - This paper presents a study to understand the issues related to using NLG to humanise explanations from a popular interpretable machine learning framework called LIME. Our study shows that the self-reported rating of the NLG explanation was higher than that for a non-NLG explanation. However, when tested for comprehension, the results were not as clear-cut, showing the need for more studies to uncover the factors responsible for high-quality NLG explanations.
AB - This paper presents a study to understand the issues related to using NLG to humanise explanations from a popular interpretable machine learning framework called LIME. Our study shows that the self-reported rating of the NLG explanation was higher than that for a non-NLG explanation. However, when tested for comprehension, the results were not as clear-cut, showing the need for more studies to uncover the factors responsible for high-quality NLG explanations.
M3 - Conference contribution
SN - 9781948087865
SP - 177
EP - 182
BT - Proceedings of The 11th International Natural Language Generation Conference
A2 - Krahmer, Emiel
A2 - Gatt, Albert
A2 - Goudbeek, Martijn
PB - Association for Computational Linguistics (ACL)
ER -