Towards making NLG a voice for interpretable Machine Learning

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

This paper presents a study of the issues involved in using NLG to humanise explanations from LIME, a popular interpretable machine learning framework. Our study shows that self-reported ratings of the NLG explanations were higher than those for the non-NLG explanations. However, when tested for comprehension, the results were less clear-cut, showing the need for further studies to uncover the factors responsible for high-quality NLG explanations.
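The pipeline the paper studies pairs LIME's weighted feature list with a generated English gloss. As a rough, hypothetical sketch of that idea (not the authors' system), the following assumes an iris dataset, a random-forest classifier, and a naive template of my own choosing; only the lime and scikit-learn calls are real APIs.

    # A minimal, hypothetical sketch: generate a LIME explanation for one
    # prediction, then verbalise its (feature, weight) pairs with a naive
    # template. Dataset, model, and wording are illustrative choices, not
    # the paper's actual experimental setup.
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_iris
    from sklearn.ensemble import RandomForestClassifier

    iris = load_iris()
    model = RandomForestClassifier(random_state=0).fit(iris.data, iris.target)

    explainer = LimeTabularExplainer(
        iris.data,
        feature_names=iris.feature_names,
        class_names=list(iris.target_names),
        discretize_continuous=True,
    )

    # Explain the model's prediction for a single instance.
    pred = int(model.predict(iris.data[:1])[0])
    exp = explainer.explain_instance(
        iris.data[0], model.predict_proba, labels=(pred,), num_features=3
    )

    # exp.as_list() yields (feature description, weight) pairs -- the
    # non-NLG explanation. A simple template turns them into English.
    clauses = [
        f"'{feat}' {'supports' if w > 0 else 'opposes'} it ({w:+.2f})"
        for feat, w in exp.as_list(label=pred)
    ]
    print(f"The model predicted '{iris.target_names[pred]}' because "
          + "; ".join(clauses) + ".")

The template here only illustrates the transformation from LIME's raw output to text; the paper's study compares explanations of these two general kinds.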
Original language: English
Title of host publication: Proceedings of The 11th International Natural Language Generation Conference
Editors: Emiel Krahmer, Albert Gatt, Martijn Goudbeek
Publisher: Association for Computational Linguistics (ACL)
Pages: 177-182
Number of pages: 6
ISBN (Print): 9781948087865
Publication status: Published - 30 Nov 2018
Event: 11th International Conference on Natural Language Generation (INLG 2018) - Tilburg University, Tilburg, Netherlands
Duration: 5 Nov 2018 - 8 Nov 2018

Conference

Conference: 11th International Conference on Natural Language Generation (INLG 2018)
Country: Netherlands
City: Tilburg
Period: 5/11/18 - 8/11/18

Cite this

Forrest, J., Sripada, S., Pang, W., & Coghill, G. (2018). Towards making NLG a voice for interpretable Machine Learning. In E. Krahmer, A. Gatt, & M. Goudbeek (Eds.), Proceedings of The 11th International Natural Language Generation Conference (pp. 177-182). [W18-6522] Association for Computational Linguistics (ACL). https://aclanthology.coli.uni-saarland.de/volumes/proceedings-of-the-11th-international-conference-on-natural-language-generation