Master thesis - AI Information Leakage: From Attacks to Impact with Subject-Centric Risk
Artificial intelligence is transforming society. AI Sweden is the national center for applied artificial intelligence, and our mission is to accelerate the use of AI for the benefit of our society, our competitiveness, and everyone living in Sweden. We drive impactful initiatives in areas such as healthcare, energy, and the public sector while pushing the boundaries of AI research and innovation in fields such as natural language processing and edge learning. Join us in harnessing the untapped value of AI to drive innovation and create sustainable value for Sweden.
We are now looking for a master thesis student to join our team.
Introduction
The growing deployment of machine learning (ML) models across domains such as healthcare, finance, and public services has intensified concerns about the privacy of individuals whose data fuels these models. While recent privacy auditing efforts have focused on quantifying the likelihood of adversarial success in attacks such as membership inference or model inversion, far less attention has been paid to the human consequences of a successful attack. Yet, regulatory frameworks such as the GDPR explicitly frame privacy risk in terms of potential harm to data subjects’ rights and freedoms.
This thesis aims to close that gap by designing and implementing a data-subject–aware privacy risk evaluation framework. The framework will extend the quantitative methodology underlying LeakPro by coupling per-attack success probabilities with a structured harm model. Instead of only asking “How likely is this attack?”, it will answer “If this attack happens, how might it affect a person’s life?”, and express the result as a distribution of expected individual harm.
This question matters deeply because current privacy auditing methods treat users as abstract data points rather than as individuals who may face tangible harm, ranging from discrimination and financial loss to stigmatization or loss of trust. This thesis tackles one of the most urgent and underexplored challenges in applied AI.
Project Background and Problem Statement
AI Sweden is leading an initiative to develop open-source tools such as LeakPro to assess information-leakage risks in ML models in collaboration with partners including RISE, Sahlgrenska, Region Halland, and AstraZeneca. Current modules of LeakPro estimate the exploitability of various attacks (membership inference, model inversion, gradient inversion, synthetic-data attacks), but they do not quantify the downstream consequences for affected individuals.
This thesis closes that gap by adding a data-subject-centric risk layer to LeakPro. Building on established notions of severity scales and harm trees, it will integrate:
(i) per-attack success probabilities calibrated from empirical audits;
(ii) explicit harm sets per attack type with severities and dimension weights (e.g., dignity, safety, economic loss); and
(iii) harm trees to propagate uncertainty and dependence between attacks.
The resulting framework will enable privacy engineers to compute not only mean expected harm for each subject or cohort, but also risk-averse metrics. This supports decision-making that prioritizes mitigations based on actual human impact rather than solely technical exploitability.
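To make the computation concrete, below is a minimal Python sketch of how such metrics could be derived. It is illustrative only, not part of LeakPro's current API: the attack names, success probabilities, harm severities, and dimension weights are invented, and attack successes are drawn independently, whereas the full framework would use harm trees to model dependence between attacks.

    import numpy as np

    # Hypothetical inputs (illustrative only, not LeakPro's API):
    # per-attack success probabilities calibrated from audits, and per-attack
    # harm sets as (harm, severity in [0, 1], dimension weight in [0, 1]).
    attacks = {
        "membership_inference": {
            "p_success": 0.12,
            "harms": [("stigmatization", 0.6, 0.8)],
        },
        "model_inversion": {
            "p_success": 0.05,
            "harms": [("economic_loss", 0.9, 0.7), ("discrimination", 0.7, 0.9)],
        },
    }

    def mean_expected_harm(attacks):
        # E[H] = sum over attacks of p(success) * sum of severity * weight.
        return sum(
            a["p_success"] * sum(sev * w for _, sev, w in a["harms"])
            for a in attacks.values()
        )

    def cvar_harm(attacks, alpha=0.95, n=100_000, seed=0):
        # Risk-averse tail metric: average harm in the worst (1 - alpha)
        # fraction of Monte Carlo outcomes. Successes are independent here;
        # harm trees would instead encode dependence between attacks.
        rng = np.random.default_rng(seed)
        totals = np.zeros(n)
        for a in attacks.values():
            success = rng.random(n) < a["p_success"]
            totals += success * sum(sev * w for _, sev, w in a["harms"])
        threshold = np.quantile(totals, alpha)
        return totals[totals >= threshold].mean()

    print(f"mean expected harm: {mean_expected_harm(attacks):.3f}")
    print(f"CVaR at 95%:        {cvar_harm(attacks):.3f}")

Per-subject or per-cohort variants of these quantities would be obtained by evaluating them with subject- or cohort-specific probabilities and harm sets.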
Outline
The objectives of this project are outlined below.
1. Literature study of harm-based privacy risk models: Survey existing privacy risk assessment approaches and how they operationalize harm; review attack types and metrics used in LeakPro.
2. Design of a preliminary evaluation approach: Formulate and refine a methodology for combining attack measurements with harm models, including how to define cohorts, harms, severities, and uncertainty propagation. This approach will be iteratively improved during the thesis.
3. Prototype and evaluation: Implement a proof-of-concept tool (potentially as a LeakPro extension) to test the approach on at least one real or synthetic ML application. Explore how the tool can produce metrics such as mean expected harm and risk-averse indicators. If time permits, the student may also contribute harm-taxonomy modules or visual risk dashboards to the open-source LeakPro platform.
Contact
Fazeleh Hoseini: fazeleh.hoseini@ai.se
In order to comply with all applicable immigration and export control regulations, we are limiting this opportunity to students who are permanent residents of the EU, Norway, Switzerland, Iceland, India, the UK, Canada, the USA, Mexico, Japan, or South Korea.
AI Sweden does not accept unsolicited support and kindly asks not to be contacted by any advertising agents, recruitment agencies, or staffing companies.
Organization: AI Labs
Role: Engineering
Location: Göteborg