Research project
AI systems are increasingly expected to act autonomously. However, current machine learning approaches, which focus on pattern matching and rely heavily on correlation, produce impenetrable systems that are notoriously difficult to monitor (so-called black-box algorithms).
The project focuses on the development of methods to verify and monitor the ethical behavior of AI systems by observing their input and output behavior and checking it against a continuously evolving societal optimum.
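To make the idea of input/output monitoring concrete, the sketch below shows one possible shape such a monitor could take: it treats the AI system as a black box and checks each observed input/output pair against a set of norm predicates that can be replaced as the societal optimum evolves. This is a minimal, hypothetical illustration in Python, not the project's actual method; the names (BehaviourMonitor, the example norms, opaque_model) are assumptions made for the example.

```python
from typing import Any, Callable, Dict, List, Tuple

# Hypothetical sketch: a runtime monitor that only sees inputs and outputs of an
# opaque AI system and checks each pair against externally supplied norms.

Norm = Callable[[Any, Any], bool]  # predicate over (input, output); True = compliant


class BehaviourMonitor:
    def __init__(self, norms: List[Norm]):
        self.norms = norms
        self.violations: List[Tuple[Any, Any, List[str]]] = []

    def update_norms(self, norms: List[Norm]) -> None:
        # The norm set can be swapped out as the societal optimum evolves.
        self.norms = norms

    def observe(self, x: Any, y: Any) -> List[str]:
        # Record the names of all norms violated by this input/output pair.
        failed = [n.__name__ for n in self.norms if not n(x, y)]
        if failed:
            self.violations.append((x, y, failed))
        return failed


# Two toy norms, purely illustrative:
def decisions_are_auditable(x: Dict, y: str) -> bool:
    # Every output must come from an allowed, auditable set of decisions.
    return y in {"approve", "deny", "refer_to_human"}


def no_automatic_denial_of_minors(x: Dict, y: str) -> bool:
    # Applicants under 18 may not be denied fully automatically.
    return not (x.get("age", 18) < 18 and y == "deny")


def opaque_model(applicant: Dict) -> str:
    # Stands in for the black-box AI system being monitored.
    return "approve" if applicant["score"] > 0.5 else "deny"


monitor = BehaviourMonitor([decisions_are_auditable, no_automatic_denial_of_minors])
applicant = {"score": 0.3, "age": 16}
decision = opaque_model(applicant)
print(monitor.observe(applicant, decision))  # -> ['no_automatic_denial_of_minors']
```

In this sketch the monitor never inspects the model's internals, which matches the black-box setting described above; all verification happens on observed behavior, and the norms are an external, replaceable artifact rather than part of the system itself.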