The research group in Responsible AI was established to study the ethical and societal impact of AI, while supporting policymakers through the development of tools and methodologies to mitigate adverse effects.
We study the ethical and societal impact of AI through the development of tools and methodologies to design, monitor, and develop trustworthy AI systems and applications.
Our research is not only about the development of intelligent systems, but also about understanding the effects of their deployment on our societies. We are working to ensure the ethical application of Artificial Intelligence (AI), both through public engagement and frequent interaction with policymakers, and by facilitating the engineering of Responsible AI.
Our diverse, multidisciplinary research programme aims to give all relevant actors access to the means and tools to develop, deploy, operate, and govern AI systems, while taking ethical, legal, and socio-economic implications into consideration.
Oliver Larsson is the new representative in the Royal Swedish Academy of Engineering Sciences' Student Council.
On October 17-18, TAIGA held a symposium on AI and complex societal problems.
Personal data can now be kept more securely than before; that is the finding of Saloni Kwatra's thesis.
Virginia Dignum is working to develop AI systems that are reliable and adapted to human values.
Detecting and preventing false information about Ukraine.
Towards the responsible implementation and use of artificial intelligence.