Natural Language Generation Challenges for Explainable AI

Research output: Contribution to conference › Paper

Abstract

Good quality explanations of artificial intelligence (XAI) reasoning must be written (and evaluated) for an explanatory purpose, targeted towards their readers, have a good narrative and causal structure, and highlight where uncertainty and data quality affect the AI output. I discuss these challenges from a Natural Language Generation (NLG) perspective, and highlight four specific “NLG for XAI” research challenges.
Original language: English
Publication status: Accepted/In press - 1 Oct 2019
Event: 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence - Tokyo, Japan
Duration: 29 Oct 2019 - 1 Nov 2019



Cite this

Reiter, E. B. (Accepted/In press). Natural Language Generation Challenges for Explainable AI. Paper presented at the 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence, Tokyo, Japan.