Formal Methods for Trustworthy Hybrid Intelligence
Research group
Hybrid intelligence emerges from the collaboration between humans and intelligent systems, with the aim of enabling humans to achieve their goals more effectively. This collaboration has numerous practical applications, such as police officers using digital sources of information to make informed decisions in risky situations, or nurses improving their medication management procedures.
Within this group we envision intelligent systems that give formal guarantees about their behavior. To that end, we aim to develop formal methods for designing, building, and testing trustworthy AI systems, and we are committed to making the vision of trustworthy AI a reality across research, education, and industry. The envisioned intelligent systems do not replace humans, but rather empower them to achieve their goals.
Autonomous rational agents with humans in the loop
Cognitive models for rational agents, e.g., extensions of the belief-desire-intention (BDI) model (see the sketch after this list).
Decision-making models.
Logic-based models of strategic interaction.
Logic-based methods for human-aware planning.
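As an illustration of the agent architectures this line of work builds on, below is a minimal sketch of a BDI-style perceive-deliberate-act cycle in Python. The `Belief` and `Agent` classes and the `clean_room` goal are hypothetical and only illustrate the belief-desire-intention decomposition; they are not the group's actual models.

```python
# Hedged sketch: one pass of a BDI-style perceive-deliberate-act cycle.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass(frozen=True)
class Belief:
    """A ground fact the agent currently takes to be true."""
    fact: str


@dataclass
class Agent:
    beliefs: set[Belief] = field(default_factory=set)
    desires: list[str] = field(default_factory=list)     # goals the agent would like to achieve
    intentions: list[str] = field(default_factory=list)  # goals the agent has committed to

    def perceive(self, percepts: set[Belief]) -> None:
        """Belief revision (here: simple accumulation of new percepts)."""
        self.beliefs |= percepts

    def deliberate(self) -> None:
        """Commit to desires whose believed preconditions hold."""
        for goal in self.desires:
            if Belief(f"can_achieve:{goal}") in self.beliefs and goal not in self.intentions:
                self.intentions.append(goal)

    def act(self) -> str | None:
        """Return the next committed intention, if any."""
        return self.intentions.pop(0) if self.intentions else None


# Usage: the agent commits to a goal once it believes the goal is achievable.
agent = Agent(desires=["clean_room"])
agent.perceive({Belief("can_achieve:clean_room")})
agent.deliberate()
print(agent.act())  # -> clean_room
```

A full BDI interpreter would add plan libraries, intention reconsideration, and a human in the loop; the point here is only the separation of belief revision, deliberation, and action.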
Trustworthy and accountable AI system behaviors
AI testing (see the sketch after this list).
Trustworthy AI assessment tools.
Assessment of AI systems from an AI ethics perspective.
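As an illustration of what formally stated requirements can look like in AI testing, the sketch below checks a toy decision component against a universally quantified safety property using the Hypothesis property-based testing library. The `recommend_dose` function, the dose limit, and the property are hypothetical examples, not assessment tools of the group.

```python
# Hedged sketch: property-based test of a toy recommender against a safety
# property (run with pytest, or call the test function directly).
from hypothesis import given, strategies as st

MAX_DOSE_MG = 100.0  # hypothetical hard safety limit


def recommend_dose(weight_kg: float, severity: int) -> float:
    """Toy stand-in for a learned or rule-based dose recommender."""
    return min(MAX_DOSE_MG, 0.5 * weight_kg * severity)


@given(weight_kg=st.floats(min_value=1.0, max_value=300.0),
       severity=st.integers(min_value=1, max_value=5))
def test_dose_never_exceeds_safety_limit(weight_kg: float, severity: int) -> None:
    # Safety property: for all admissible inputs, the recommendation
    # respects the hard limit a human supervisor relies on.
    assert recommend_dose(weight_kg, severity) <= MAX_DOSE_MG
```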
Knowledge modeling
Logic-based theory of mind models (see the sketch after this list).
Knowledge elicitation.
Software tools for developing intelligent interactive systems, e.g., libraries for building intelligent systems with logic-based languages in Unity.
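As an illustration of the first item in this list, the sketch below represents first-order theory of mind as nested belief bases: an agent's own beliefs plus its model of what another agent believes. The agents, facts, and query helpers are hypothetical stand-ins for the logic-based formalisms studied here.

```python
# Hedged sketch: nested belief bases for first-order theory of mind.
from __future__ import annotations

from dataclasses import dataclass, field


@dataclass
class MentalModel:
    """An agent's own beliefs plus its models of what other agents believe."""
    beliefs: set[str] = field(default_factory=set)
    models_of_others: dict[str, MentalModel] = field(default_factory=dict)

    def believes(self, fact: str) -> bool:
        return fact in self.beliefs

    def believes_that(self, other: str, fact: str) -> bool:
        """First-order query: does this agent believe that `other` believes `fact`?"""
        model = self.models_of_others.get(other)
        return model is not None and model.believes(fact)


# Usage: the nurse believes the dosage changed, and believes the patient
# does not yet know, so the change should be explained to the patient.
nurse = MentalModel(
    beliefs={"dosage_changed"},
    models_of_others={"patient": MentalModel()},
)
print(nurse.believes("dosage_changed"))                  # True
print(nurse.believes_that("patient", "dosage_changed"))  # False
```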