"False"
Skip to content
printicon
Main menu hidden.
Published: 2024-05-20

New advanced AI solutions will stop intruders accessing and leaking your data

NEWS The private and public sectors both use and share data. But what happens if systems are hacked and personal data or sensitive information is leaked? At Umeå University, an entirely new security approach to data-driven decision-making is being developed. "We will make it really difficult for attackers to identify people, buildings or areas for further dissemination," says Professor Vicenç Torra at the Department of Computing Science, Umeå University.

The public sector collects large amounts of data. This can range from the location of water pipes to childcare needs, income inequalities, criminality, climate change or subsidies; all important information for society. The EU estimates the value of open data to be between €194 and €334 billion by 2025.

In the private sector, data and AI are used to analyse purchasing behaviour, make informed decisions and offer personalised solutions. In the public sector, open data is a prerequisite for developing solutions that can help in healthcare or, for example, inform you about the water quality in your municipality's bathing lakes. Public-sector organisations also need to share and utilise each other's data to create new value, knowledge and innovation. "However, we can see that security protections are still insufficient, and thus there is a high risk that sensitive or personal information can be disclosed," says Vicenç Torra, Professor of Computing Science at Umeå University.

Supported by WASP and the Swedish Research Council

He is a world-leading researcher in the field of AI and data privacy and is active as a WASP professor at the Department of Computing Science, Umeå University. WASP, the Wallenberg AI, Autonomous Systems and Software Program, is Sweden's largest research programme in the field, and Umeå is one of its five partner universities. With additional funding from the Swedish Research Council, Sweden's largest government research funder, he is now developing AI solutions that open up completely new security opportunities.
"Instead of repairing and stopping potentially very large leaks, we are now developing advanced protection mechanisms that are integrated into the data-driven models already at the prototype stage. This makes it possible to train directly private models, which ultimately makes attacks more difficult," says Professor Vicenç Torra.

Insufficient solutions

Data leaks can cause enormous damage and often involve huge costs, as recent years have shown. And although many authorities and companies work hard to maintain a high level of security, Torra argues that it is often possible to recreate or piece together information and thereby reveal or exploit sensitive data. The problems are particularly acute for data with complicated or unclear relationships between objects, for temporary information, and for dynamic graphs and metering data, which currently have no protection mechanisms at all. "For example, it is more difficult to protect privacy when there is data concerning several people at the same address, or data on treatments and how they have affected a patient's health over time, for instance if a patient has medical records from several hospitals. It can also involve metering data from a power grid, where it is possible to identify who is using it," says Vicenç Torra.
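The linkage risk Torra describes is often quantified with measures such as k-anonymity: if a combination of seemingly harmless attributes (a "quasi-identifier", such as postcode plus age band) is shared by fewer than k records, those records can be singled out by cross-referencing other data sources. The article does not name a specific measure; the pandas sketch below, on invented toy data, simply shows how to spot such risky groups.

```python
import pandas as pd

# Toy records: the quasi-identifiers (postcode, age band) look harmless on
# their own, but rare combinations can single out individuals or households.
records = pd.DataFrame({
    "postcode": ["907 36", "907 36", "907 36", "903 47", "903 47"],
    "age_band": ["30-39", "30-39", "40-49", "30-39", "40-49"],
    "diagnosis": ["flu", "asthma", "flu", "diabetes", "flu"],
})

quasi_identifiers = ["postcode", "age_band"]

# k-anonymity: every combination of quasi-identifier values must occur in
# at least k records, otherwise those records are re-identifiable by linkage.
group_sizes = records.groupby(quasi_identifiers).size()
k = group_sizes.min()
print(f"dataset is {k}-anonymous")
print("risky groups (size < 2):")
print(group_sizes[group_sizes < 2])
```

A group of size one means that anyone who knows a person's postcode and age band can recover that person's diagnosis from the "anonymised" release.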

Damaging trust

The dissemination of data can lead to widespread problems. An intruder can use the information to draw conclusions about you as an individual or company and then disseminate, distort or sell that information. "Not only will this damage the trust of customers and the credibility of researchers, it can also be directly harmful to individuals or groups: revealing which people in an area suffer from a disease, for example, can lead to blackmail, threats and danger," says Professor Vicenç Torra.

World-leading research

This isn't the only data privacy project Vicenç Torra is working on. He also leads the Nausica research group, where PhD students and postdoctoral researchers are working to build transparent and privacy-aware AI systems, focusing on data integrity for computing and privacy-aware machine learning, and aiming to build models for data analytics as well as for decision-making. "The new and innovative solutions will hopefully minimise damage, not least in terms of the increased risk of hacker attacks. We will make it really difficult to track and identify users when such an attack occurs," says Vicenç Torra.
More information about Nausica is available on the research group's website.

Contact Information

Professor Vicenç Torra is a world-leading researcher in AI, data privacy, approximate reasoning and decision making. Vicenç Torra leads the research group Nausica: Privacy-Aware Transparent Decisions Group at the Department of Computing Science, Umeå University. The research project "Privacy for Complex Data" runs until summer 2025 and is funded by the Swedish Research Council.