
Prof. Eva Schmidt (TU Dortmund): Reason-Giving XAI and Responsibility

Research Seminar Theoretical Philosophy

Abstract:

We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificially intelligent (AI) systems.

We first show that, to close moral responsibility gaps (Matthias 2004), a human in the loop is often needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic and control conditions on moral responsibility to argue that, in order to be responsible for her decision, the human in the loop has to have an explanation of the system's recommendation available. Reason explanations are especially well-suited to this end, and we examine whether – and how – it might be possible to make such explanations fit with AI systems. We support our claims by focusing on a case of disagreement between the human in the loop and the AI system.

Speaker:

Eva Schmidt is a Professor of Theoretical Philosophy – with a focus on epistemology and philosophy of action – at TU Dortmund. Previously she worked in Zurich as a member of a DACH project on The Structure and Development of Understanding Actions and Reasons. She is involved in the Saarbrücken/Dortmund-based project Explainable Intelligent Systems (EIS), which is supported by a grant from the Volkswagen Foundation. Schmidt has published in journals such as Noûs, Ethics, Philosophical Studies, and Artificial Intelligence. Her book Modest Nonconceptualism: Phenomenology, Epistemology, and Content was published by Springer in 2015.

Details

06.07.2021, 18:30 – 20:15