#frAIday: Social Explainable AI - What is it and how can we make it happen?
Friday 27 September 2024 at 12:15 - 13:00
MIT.A.216 & Zoom
#frAIday is a hybrid event. The talk will be held in MIT.A.216 at Umeå University; you can also join us on Zoom. Welcome!
Abstract

Explainable AI (XAI) is a domain that develops methods that make it possible to explain, or at least justify, the recommendations and actions of AI systems in ways similar to how humans do. However, current XAI methods tend to produce non-interactive results that are mainly (or only) understandable to the AI engineers themselves. Social XAI (sXAI) is proposed as a name for XAI functionality that, at least to some degree, mimics how humans explain and justify their actions and opinions. sXAI methods should be interactive and should adapt their explanations to the explainee's background knowledge, interests and preferred way of receiving explanations, as well as to the pace of interaction. The presentation shows how sXAI can be implemented using the Contextual Importance and Utility (CIU) method - and gives some reasons why we probably won't see sXAI happening anytime soon.
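To give a flavour of the CIU method mentioned in the abstract, the sketch below estimates Contextual Importance (how much the output can vary when one feature varies in the current context) and Contextual Utility (how favourable the current feature value is within that variation) by sampling. This is a simplified, illustrative version under assumed notation, not the official CIU implementation; the function names, sampling scheme and normalisation range are assumptions.

```python
import random

def ciu(predict, instance, feature, fmin, fmax,
        out_min=0.0, out_max=1.0, n=100):
    """Sketch of Contextual Importance (CI) and Contextual Utility (CU)
    for a single feature: vary that feature over [fmin, fmax] while
    keeping the other feature values fixed, and observe the output."""
    y = predict(list(instance))     # output for the actual instance
    outputs = [y]
    for _ in range(n):
        x = list(instance)
        x[feature] = random.uniform(fmin, fmax)  # vary only this feature
        outputs.append(predict(x))
    cmin, cmax = min(outputs), max(outputs)
    # CI: share of the total output range reachable in this context
    ci = (cmax - cmin) / (out_max - out_min)
    # CU: where the current output sits within the contextual range
    cu = (y - cmin) / (cmax - cmin) if cmax > cmin else 0.5
    return ci, cu
```

For example, with a linear model `lambda x: 0.7 * x[0] + 0.3 * x[1]` and features in [0, 1], feature 0 gets a CI close to 0.7, reflecting its weight; explanation text ("highly important", "favourable value", and so on) can then be generated and adapted from the CI and CU values, which is where the interactive, social part of sXAI would come in.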
Kary Främling is Professor in Data Science at Umeå University, with an emphasis on data analysis and machine learning. He is also head of the explainable AI (XAI) team, and his core research focuses on Explainable Artificial Intelligence (XAI), notably so-called "outcome explanation", i.e. explaining and/or justifying results, actions or recommendations made by any kind of AI system, including (deep or not) neural networks.