Research group
The social context in which AI systems function becomes more important as those systems grow more autonomous and make more consequential decisions.
We investigate computational models of social concepts such as norms, practices, and organizations, so that AI systems can become aware of their social context and behave as people expect them to. For example, we expect a care robot to insist that a patient take their medicine, but not while the patient is on the phone with their partner. And when we interact with a chatbot to apply for a licence to hunt moose, it should be able to explain why a licence is not granted.
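One way to read the care-robot example is as context-dependent norm evaluation: a norm licenses an action only when its activating condition holds in the current social context. The sketch below illustrates this idea in a few lines of Python; the names (`Norm`, `permitted_actions`, the `patient_on_phone` context key) are hypothetical illustrations, not part of any system developed by the group.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical sketch: a norm pairs an action with a condition on the
# social context under which performing that action is appropriate.
@dataclass
class Norm:
    action: str
    condition: Callable[[dict], bool]

def permitted_actions(norms: list[Norm], context: dict) -> list[str]:
    """Return the actions whose norms are active in this context."""
    return [n.action for n in norms if n.condition(context)]

# Reminding a patient about medication is appropriate, except while
# the patient is engaged in a private phone call.
norms = [
    Norm("remind_medication",
         lambda ctx: not ctx.get("patient_on_phone", False)),
]

busy = {"patient_on_phone": True}
free = {"patient_on_phone": False}

print(permitted_actions(norms, busy))  # no reminder while on the phone
print(permitted_actions(norms, free))  # reminder is appropriate now
```

Because each norm carries its condition explicitly, a system built this way can also report *which* condition blocked an action, which is the kind of explanation the chatbot example calls for.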
We apply the resulting theory in numerous applications, ranging from chatbots and social robotics (as in the examples above) to social simulations for policy makers and strategic organizational decision-making.
HHAI 2024: Hybrid Human AI Systems for the Social Good: Proceedings of the Third International Conference on Hybrid Human-Artificial Intelligence. Amsterdam: IOS Press, 2024, pp. 114–123
Advances in Social Simulation: Proceedings of the 18th Social Simulation Conference, Glasgow, UK, 4–8 September 2023. Cham: Springer Nature, 2024, pp. 163–176
Advances in Social Simulation: Proceedings of the 18th Social Simulation Conference, Glasgow, UK, 4–8 September 2023. Cham: Springer Nature, 2024, pp. 533–545
Advances in Social Simulation: Proceedings of the 18th Social Simulation Conference, Glasgow, UK, 4–8 September 2023. Cham: Springer Nature, 2024, pp. 235–248