Evaluating Explainable AI (XAI) in Terms of User Gender and Educational Background (Preprint)
Abstract:
In this paper we share the results of an in-depth study that compares how people empirically react to different types of XAI, in particular different types of textual explanations and word clouds. Specifically, the purpose of this research is to investigate how explanations in recommendation engines affect a user's trust in, and comprehension of, a recommendation system. We found that gender and educational background had an interactive effect on both trust and understanding, and that different groups preferred different explanations. However, the word cloud XAI was overwhelmingly rejected by all groups. In other words, we found that preferences among types of XAI differ by gender and educational background. In particular, we found that women's trust and understanding of the explanations did not vary significantly with educational background, whereas men's trust and understanding did differ statistically with their backgrounds. Although the study focused on recommendation engines, the results may generalize to other forms of XAI. We investigated these questions in a laboratory-controlled experiment: we exposed users to an explainable recommendation system and gauged the effects of the explanations through a series of questions. We used three approaches to presenting the explanations: simple textual, technical textual, and visual explanations. We particularly focused on how people's backgrounds (STEM [Science, Technology, Engineering, and Mathematics] vs. non-STEM) and gender (male vs. female) affected their trust in and understanding of the explanations given by the recommendation system.