AEQUITAS: Assessment and Engineering of Equitable, Unbiased, Impartial and Trustworthy AI-systems
Research project
Unbiased AI made in Europe.
AEQUITAS is developing an experimental test environment to detect, diagnose and address injustice, bias, inequality and discrimination in AI systems. This new Horizon Europe project, funded by the European Commission, builds on a strong consortium of AI and domain experts, social scientists and representatives of minorities and at-risk groups.
AI-based decision-support systems are increasingly used across industry and the public and private sectors. As our society faces a sharp rise in inequality and intersectional discrimination, we must prevent AI systems from reinforcing these trends.
Trustworthy AI
Fairness stands as one of the main principles of Trustworthy AI promoted at the EU level, but how to translate it into technical, functional, and lawful requirements in AI system design is still an open question. Similarly, it remains unclear how to test whether a system complies with these principles, and how to repair it if it does not.
AEQUITAS proposes to design an environment to:
Assess bias in AI systems
Provide effective methods to reduce bias, and
Provide guidelines, methods, and software technology for designing new bias-free systems
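To make the first point concrete, bias assessment typically starts with fairness metrics computed over a system's decisions. The sketch below illustrates one such metric, the demographic parity gap; the data, function name, and group labels are illustrative assumptions, not part of the AEQUITAS platform itself.

```python
def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between two groups.

    decisions: list of 0/1 outcomes (e.g. 1 = loan approved, candidate hired)
    groups: parallel list of protected-attribute values, one per decision
    """
    rates = {}
    for g in set(groups):
        selected = [d for d, grp in zip(decisions, groups) if grp == g]
        rates[g] = sum(selected) / len(selected)
    values = list(rates.values())
    return abs(values[0] - values[1])

# Toy example: hiring decisions (1 = hired) for two demographic groups.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group "a" is hired at rate 0.75, group "b" at rate 0.25 -> gap of 0.50.
print(f"demographic parity gap: {demographic_parity_gap(decisions, groups):.2f}")
```

A gap near zero suggests the two groups receive positive decisions at similar rates; a large gap flags a disparity worth diagnosing further. Real assessments would combine several such metrics, since no single fairness measure captures all forms of discrimination.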
Real use cases in healthcare, human resources, and challenges faced by socially disadvantaged groups will test the experimental platform and demonstrate the effectiveness of the proposed solutions.