Natural Language Generation Challenges for Explainable AI

Research output: Contribution to conference › Unpublished paper › peer-review


Abstract

Good quality explanations of artificial intelligence (XAI) reasoning must be written (and evaluated) for an explanatory purpose, targeted towards their readers, have a good narrative and causal structure, and highlight where uncertainty and data quality affect the AI output. I discuss these challenges from a Natural Language Generation (NLG) perspective, and highlight four specific “NLG for XAI” research challenges.
Original language: English
Publication status: Accepted/In press - 1 Oct 2019
Event: 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence - Tokyo, Japan
Duration: 29 Oct 2019 - 1 Nov 2019

Workshop

Workshop: 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence
Country/Territory: Japan
City: Tokyo
Period: 29/10/19 - 1/11/19

Bibliographical note

This paper started off as a (much shorter) blog post: https://ehudreiter.com/2019/07/19/nlg-and-explainable-ai/. My thanks to the people who commented on this post, as well as the anonymous reviewers, the members of the Aberdeen CLAN research group, the members of the Explaining the Outcomes of Complex Models project at Monash, and the members of the NL4XAI research project, all of whom gave me excellent feedback and suggestions. My thanks also to Prof Rene van der Wal for his help in the experiment mentioned in section 3.
