Good quality explanations of artificial intelligence (XAI) reasoning must be written (and evaluated) for an explanatory purpose, targeted towards their readers, have a good narrative and causal structure, and highlight where uncertainty and data quality affect the AI output. I discuss these challenges from a Natural Language Generation (NLG) perspective, and highlight four specific “NLG for XAI” research challenges.
Publication status: Accepted/In press - 1 Oct 2019
Event: 1st Workshop on Interactive Natural Language Technology for Explainable Artificial Intelligence, Tokyo, Japan
Duration: 29 Oct 2019 → 1 Nov 2019