If you are not already registered with #frAIday, you can do so here to receive the Zoom link
#frAIday hybrid
The talk will be held in MIT.A.216 at Umeå University. You can also join us on Zoom. Welcome!
Abstract
Explainable AI (XAI) is a domain that develops methods that make it possible to explain, or at least justify, recommendations and actions of AI systems in similar ways as humans do. However, current XAI methods tend to produce non-interactive results that are mainly (or only) understandable to the AI engineers themselves. Social XAI (sXAI) is proposed as a name for XAI functionality that mimics, at least to some degree, how humans explain and justify their actions and opinions. sXAI methods should be interactive and adapt their explanations to the explainee’s background knowledge, interests and preferred way of receiving explanations, as well as to the pace of interaction. The presentation shows how sXAI can be implemented using the Contextual Importance and Utility (CIU) method – and gives some reasons why we probably won’t see sXAI happening anytime soon.
Kary Främling is Professor in Data Science at Umeå University, with an emphasis on data analysis and machine learning. He is also head of the explainable AI (XAI) team and describes his core research focus as Explainable Artificial Intelligence (XAI), notably so-called "outcome explanation", i.e. explaining and/or justifying results, actions or recommendations made by any kind of AI system, including (deep or not) neural networks.