Natural Language Generation Challenges for Explainable AI

Research output: Contribution to conference (Paper)

Abstract

Good quality explanations of artificial intelligence (XAI) reasoning must be written (and evaluated) for an explanatory purpose, targeted towards their readers, have a good narrative and causal structure, and highlight where uncertainty and data quality affect the AI output. I discuss these challenges from a Natural Language Generation (NLG) perspective, and highlight four specific “NLG for XAI” research challenges.
Original language: English
Publication status: Accepted/In press - 1 Oct 2019
Event: 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence - Tokyo, Japan
Duration: 29 Oct 2019 – 1 Nov 2019

Workshop

Workshop: 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence
Country: Japan
City: Tokyo
Period: 29/10/19 – 1/11/19

Cite this

Reiter, E. B. (Accepted/In press). Natural Language Generation Challenges for Explainable AI. Paper presented at 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence, Tokyo, Japan.

Notes

This paper started off as a (much shorter) blog post: https://ehudreiter.com/2019/07/19/nlg-and-explainable-ai/. My thanks to the people who commented on this blog, as well as the anonymous reviewers, the members of the Aberdeen CLAN research group, the members of the Explaining the Outcomes of Complex Models project at Monash, and the members of the NL4XAI research project, all of whom gave me excellent feedback and suggestions. My thanks also to Prof Rene van der Wal for his help with the experiment mentioned in Section 3.
