Towards making NLG a voice for interpretable Machine Learning

James Forrest, Somayajulu Sripada, Wei Pang, George Coghill

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution


Abstract

This paper presents a study of the issues involved in using NLG to humanise explanations from LIME, a popular interpretable machine learning framework. Our study shows that the self-reported rating of the NLG explanation was higher than that of a non-NLG explanation. However, when tested for comprehension, the results were less clear-cut, showing the need for further studies to uncover the factors responsible for high-quality NLG explanations.
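To make the idea concrete, the sketch below shows one way a LIME tabular explanation might be verbalised with simple template-based NLG. This is an illustration only, not the system evaluated in the paper: the iris dataset, random-forest classifier, and sentence template are assumptions for the example.

```python
# Illustrative sketch: verbalising a LIME explanation with a template.
# Not the authors' system. Requires: pip install lime scikit-learn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = LimeTabularExplainer(
    data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

instance = data.data[0]
predicted = int(model.predict(instance.reshape(1, -1))[0])

# LIME returns (feature condition, weight) pairs for the requested class,
# e.g. ("petal width (cm) <= 0.30", 0.25).
exp = explainer.explain_instance(
    instance, model.predict_proba, labels=(predicted,), num_features=3
)

# Template-based NLG: turn each weighted condition into a clause.
clauses = [
    f"{cond} {'supports' if weight > 0 else 'counts against'} this class"
    for cond, weight in exp.as_list(label=predicted)
]
print(
    f"The model predicts '{data.target_names[predicted]}' because "
    + "; ".join(clauses) + "."
)
```

A real NLG pipeline would go beyond this single template (aggregating clauses, ordering by weight, choosing lexical items), but the core input is the same: LIME's list of weighted feature conditions.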
Original language: English
Title of host publication: Proceedings of The 11th International Natural Language Generation Conference
Editors: Emiel Krahmer, Albert Gatt, Martijn Goudbeek
Publisher: Association for Computational Linguistics (ACL)
Pages: 177-182
Number of pages: 6
ISBN (Print): 9781948087865
Publication status: Published - 30 Nov 2018
Event: 11th International Conference on Natural Language Generation (INLG 2018) - Tilburg University, Tilburg, Netherlands
Duration: 5 Nov 2018 – 8 Nov 2018

Conference

Conference: 11th International Conference on Natural Language Generation (INLG 2018)
Country/Territory: Netherlands
City: Tilburg
Period: 5/11/18 – 8/11/18

Bibliographical note

I would like to acknowledge the support given to me by the Engineering and Physical Sciences Research Council (EPSRC) DTP, grant number EP/N509814/1.
