Natural Language Generation Challenges for Explainable AI

Research output: Contribution to conference › Unpublished paper › peer-review

Abstract

Good quality explanations of artificial intelligence (XAI) reasoning must be written (and evaluated) for an explanatory purpose, targeted towards their readers, have a good narrative and causal structure, and highlight where uncertainty and data quality affect the AI output. I discuss these challenges from a Natural Language Generation (NLG) perspective, and highlight four specific “NLG for XAI” research challenges.
Original language: English
Publication status: Accepted/In press - 1 Oct 2019
Event: 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence - Tokyo, Japan
Duration: 29 Oct 2019 – 1 Nov 2019

Workshop

Workshop: 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence
Country/Territory: Japan
City: Tokyo
Period: 29/10/19 – 1/11/19
