NAUSICA: PrivAcy-AWare traNSparent deCIsions group
Research group
We are interested in privacy-aware transparent AI systems. We focus on data privacy for data processing, privacy-aware machine learning for building models and data analytics, and decision models for making decisions.
AI systems are increasingly used to enhance decision-making. One of the main building blocks of AI is data. Machine and statistical learning methods extract knowledge from the underlying data in the form of models and inferences. The typical workflow consists of feeding pre-processed data into machine learning algorithms, which transform the data into models; finally, the models are embedded into AI systems for decision-making.
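A minimal sketch of this workflow in Python, using scikit-learn; the dataset, preprocessing, and model choices here are illustrative assumptions, not the group's actual pipeline:

# Minimal sketch of the data -> model -> decision workflow described above.
# Dataset, preprocessing, and model are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

X, y = load_breast_cancer(return_X_y=True)                # raw data
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Pre-processing and a learning algorithm transform the data into a model ...
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_tr, y_tr)

# ... and the model is embedded into a system that makes decisions.
decisions = model.predict(X_te)
print(f"test accuracy: {model.score(X_te, y_te):.3f}")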
Important privacy issues
Machine learning and AI have spread into numerous domains where sensitive personal data are collected from users, including healthcare, personal financial services, social networking, e-commerce, location services and recommender systems. Data from these domains are continuously collected and analysed to derive useful decisions and inferences. However, the sensitive nature of these data raises privacy concerns that cannot be successfully addressed through naive anonymization alone.
Not only data but also models and aggregates can lead to disclosure, as they may contain traces of the data used in their computation. Attacks on data (e.g., reidentification and transparency attacks) and on models (e.g., membership attacks, model inversion) have proven the need for appropriate protection mechanisms.
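A toy illustration of a reidentification (linkage) attack: an "anonymized" release that keeps quasi-identifiers can be joined against public data. All column names and records below are invented for illustration:

# Toy linkage (reidentification) attack: an "anonymized" release that keeps
# quasi-identifiers (zip code, birth year, sex) is joined with a public
# register that contains names. All records here are invented.
import pandas as pd

released = pd.DataFrame({                # de-identified data release
    "zip": ["90210", "10001"],
    "birth_year": [1980, 1975],
    "sex": ["F", "M"],
    "diagnosis": ["diabetes", "asthma"], # the sensitive attribute
})
public = pd.DataFrame({                  # e.g., a public voter register
    "name": ["Alice", "Bob"],
    "zip": ["90210", "10001"],
    "birth_year": [1980, 1975],
    "sex": ["F", "M"],
})

# Joining on the quasi-identifiers re-attaches identities to diagnoses.
reidentified = released.merge(public, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])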
The NAUSICA research group develops techniques so that data are processed, models are built, and decisions are made with appropriate privacy guarantees. AI systems must also be able to handle uncertainty in order to be used in the real world, where ambiguity, vagueness and randomness are rarely absent. Approximate reasoning studies models of reasoning under uncertainty, such as probability-based, proof-theory-based and fuzzy-set-based models.
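As a minimal example of the fuzzy-set-based flavour of approximate reasoning: a vague predicate such as "tall" is modelled as a degree of membership in [0, 1] rather than a true/false value. The membership function and its breakpoints below are illustrative assumptions:

# Minimal fuzzy-set example: "tall" is a vague predicate, so membership
# is a degree in [0, 1] rather than true/false. The breakpoints
# (160 cm, 190 cm) are illustrative assumptions.
def tall(height_cm: float) -> float:
    """Piecewise-linear membership function for the fuzzy set 'tall'."""
    if height_cm <= 160:
        return 0.0
    if height_cm >= 190:
        return 1.0
    return (height_cm - 160) / (190 - 160)

for h in (155, 170, 185, 195):
    print(f"{h} cm -> membership in 'tall': {tall(h):.2f}")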
Responsible AI
AI systems, in line with trustworthy AI guidelines, have fairness, accountability, explainability, and transparency as fundamental requirements. These requirements affect the whole design and building process of AI systems, from data to decisions. Data privacy, machine and statistical learning, and approximate reasoning models are basic components of this process, but they need to be combined to provide a holistic solution.
We want to understand the fundamental principles that permit us to build such privacy-aware transparent AI systems, and to develop algorithms for this purpose.
International research collaborations
The group collaborates with several national and international research groups, including groups at Tamagawa University, Osaka University, and the University of Tsukuba in Japan, Maynooth University in Ireland, and the Universitat Autònoma de Barcelona in Spain, and has links with industry and governmental organisations.
Some keywords of our research
Data privacy and machine learning
Privacy-aware machine and statistical learning methods
Privacy-aware federated learning
Disclosure risk assessment
Transparency attacks
Privacy models (privacy for reidentification, k-anonymity, differential and integral privacy)
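One of the privacy models listed above, differential privacy, can be sketched with the classical Laplace mechanism: noise scaled to sensitivity/epsilon is added to a query answer, so the output distribution changes little when any single record changes. The data, query, and epsilon below are illustrative assumptions:

# Sketch of the Laplace mechanism for epsilon-differential privacy.
# Data, query, and epsilon are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def dp_count(data, predicate, epsilon: float) -> float:
    """Return a differentially private count of records satisfying predicate."""
    true_count = sum(predicate(x) for x in data)
    sensitivity = 1.0  # adding/removing one record changes a count by at most 1
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

ages = [34, 45, 29, 61, 52, 38]
print(dp_count(ages, lambda a: a > 40, epsilon=0.5))  # noisy count of ages > 40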
"Understanding and designing Block Copolymers using AI." Funded by Kempestiftelserna, (2024-2027)
Appropriate automation: towards an understanding of robots and AI in the social services from an organizational and user perspective, Forte, (2021-2027)
CyberSecIT: Automated and Autonomous Cybersecurity for IoT, funded by WASP, (2022-2027)