Abstract
Good-quality explanations of artificial intelligence (XAI) reasoning must be written (and evaluated) for an explanatory purpose, be targeted towards their readers, have a good narrative and causal structure, and highlight where uncertainty and data quality affect the AI output. I discuss these challenges from a Natural Language Generation (NLG) perspective, and highlight four specific “NLG for XAI” research challenges.
| Original language | English |
|---|---|
| Publication status | Accepted/In press - 1 Oct 2019 |
| Event | 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence, Tokyo, Japan. Duration: 29 Oct 2019 → 1 Nov 2019 |
Workshop
| Workshop | 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence |
|---|---|
| Country/Territory | Japan |
| City | Tokyo |
| Period | 29/10/19 → 1/11/19 |