Explanation in Human-AI Systems: A Literature Meta-Review, Synopsis of Key Ideas and Publications, and Bibliography for Explainable AI
Abstract:
This review addresses the question, "What makes for a good explanation?" with reference to AI systems. The report encapsulates the history of computer science efforts to create systems that explain and instruct, the explainability issues and challenges of modern AI, and the leading psychological theories of explanation. Methodological guidance for the evaluation of XAI systems emphasizes the differences between global and local explanations, the need to evaluate the performance of the human-machine work system as a whole, and the need to recognize that experimental procedures tacitly impose on the user the burden of self-explanation. Tasks that involve human-AI interactivity and co-adaptation, such as bug or oddball detection, hold promise for XAI evaluation since they conform to the notions of "explanation-as-exploration" and explanation as a co-adaptive dialog process. Tasks that involve predicting the AI's determinations, combined with post-experimental interviews, hold promise for the study of mental models in the XAI context.