There is growing interest in explanations as an ethical and technical solution to the problem of 'opaque' AI systems. In this essay, we point out that technical and ethical approaches to Explainable AI (XAI) rest on different assumptions and pursue different aims. Further, the organizational perspective is missing from this discourse. In response, we formulate key questions for Explainable AI research from an organizational perspective: 1) Who is the 'user' in Explainable AI? 2) What is the 'purpose' of an explanation in Explainable AI? and 3) Where does an explanation 'reside' in Explainable AI? Our aim is to prompt collaboration across the disciplines working on Explainable AI.