Master thesis- AI Information Leakage: From Attacks to Impact with Subject-Centric Risk
Artificial intelligence is transforming society. AI Sweden is the national center for applied artificial intelligence and our mission is to accelerate the use of AI for the benefit of our society, our competitiveness, and for everyone living in Sweden. We drive impactful initiatives in areas such as healthcare, energy, and the public sector while pushing the boundaries of AI research and innovation in fields such as natural language processing and edge learning. Join us in harnessing the untapped value of AI to drive innovation and create sustainable value for Sweden.
We are now looking for a master thesis student to join our team.
Introduction
The growing deployment of machine learning (ML) models across domains such as healthcare, finance, and public services has intensified concerns about the privacy of individuals whose data fuels these models. While recent privacy auditing efforts have focused on quantifying the likelihood of adversarial success in attacks such as membership inference or model inversion, far less attention has been paid to the human consequences of a successful attack. Yet, regulatory frameworks such as the GDPR explicitly frame privacy risk in terms of potential harm to data subjects’ rights and freedoms.
This thesis aims to close that gap by designing and implementing a data-subject–aware privacy risk evaluation framework. The framework will extend the quantitative methodology underlying LeakPro \cite{leakpro} by coupling per-attack success probabilities with a structured harm model. Instead of only asking “How likely is this attack?”, it will answer “If this attack happens, how might it affect a person’s life?”, and express the result as a distribution of expected individual harm.
This question matters deeply because current privacy auditing methods treat users as abstract data points rather than as individuals who may face tangible harm, ranging from discrimination and financial loss to stigmatization or loss of trust. This thesis tackles one of the most urgent and underexplored challenges in applied AI.
Project Background and Problem Statement
AI Sweden is leading an initiative to develop open-source tools such as LeakPro to assess information-leakage risks in ML models in collaboration with partners including RISE, Sahlgrenska, Region Halland, and AstraZeneca. Current modules of LeakPro estimate the exploitability of various attacks (membership inference, model inversion, gradient inversion, synthetic-data attacks), but they do not quantify the downstream consequences for affected individuals.
This thesis closes that gap by adding a data-subject–centric risk layer to LeakPro. Building on established notions of severity scales and harm trees, it will integrate:
(i) per-attack success probabilities calibrated from empirical audits;
(ii) explicit harm sets per attack type with severities and dimension weights (e.g., dignity, safety, economic loss); and
(iii) harm trees to propagate uncertainty and dependence between attacks.
The resulting framework will enable privacy engineers to compute not only mean expected harm for each subject or cohort, but also risk-averse metrics. This supports decision-making that prioritizes mitigations based on actual human impact rather than solely technical exploitability.
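To make the combination of items (i)–(iii) concrete, the sketch below shows one way mean expected harm and a risk-averse metric (here, Conditional Value-at-Risk) could be computed from per-attack success probabilities and weighted harm severities. All attack names, probabilities, harms, severities, and weights are invented illustrative values, not calibrated LeakPro outputs, and the simulation assumes independent attacks; modeling dependence between attacks is exactly what the harm trees in item (iii) would add.

```python
import random

# Hypothetical per-attack success probabilities, as if calibrated
# from empirical audits (item i). Values are invented for illustration.
attack_success = {
    "membership_inference": 0.15,
    "model_inversion": 0.05,
}

# Hypothetical harm sets per attack type (item ii):
# (harm name, severity in [0, 1], dimension weight in [0, 1]).
harms = {
    "membership_inference": [
        ("stigmatization", 0.6, 0.5),
        ("loss_of_trust", 0.3, 0.2),
    ],
    "model_inversion": [
        ("economic_loss", 0.8, 0.7),
    ],
}


def weighted_harm(attack):
    """Weighted severity of all harms tied to one attack."""
    return sum(sev * w for _, sev, w in harms[attack])


def expected_harm():
    """Mean expected harm: sum over attacks of P(success) * weighted severity."""
    return sum(p * weighted_harm(a) for a, p in attack_success.items())


def cvar(alpha=0.95, n=100_000, seed=0):
    """Risk-averse metric: Conditional Value-at-Risk of simulated harm.

    Monte Carlo over attack outcomes, assuming independence between
    attacks (a simplification; harm trees would propagate dependence).
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        h = sum(weighted_harm(a)
                for a, p in attack_success.items()
                if rng.random() < p)
        samples.append(h)
    samples.sort()
    tail = samples[int(alpha * n):]  # worst (1 - alpha) share of outcomes
    return sum(tail) / len(tail)


if __name__ == "__main__":
    print(f"mean expected harm: {expected_harm():.4f}")
    print(f"CVaR(0.95):         {cvar():.4f}")
```

Because a few severe outcomes can hide behind a small mean, the tail-focused CVaR will typically exceed the mean expected harm, which is what makes such risk-averse indicators useful for prioritizing mitigations.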
Outline
The objectives of this project are outlined below.
1. Literature study of harm-based privacy risk models: Survey existing privacy risk assessment approaches and how they operationalize harm; review attack types and metrics used in LeakPro.
2. Design of a preliminary evaluation approach: Formulate and refine a methodology for combining attack measurements with harm models, including how to define cohorts, harms, severities, and uncertainty propagation. This approach will be iteratively improved during the thesis.
3. Prototype and evaluation: Implement a proof-of-concept tool (potentially as a LeakPro extension) to test the approach on at least one real or synthetic ML application. Explore how the tool can produce metrics such as mean expected harm and risk-averse indicators. If time permits, the student may also contribute harm-taxonomy modules or visual risk dashboards to the open-source LeakPro platform.
Contact
Fazeleh Hoseini: fazeleh.hoseini@ai.se
Why work for AI Sweden?
To us, artificial intelligence is not only about tech, it’s a force for positive societal change. You'll be working alongside leading AI experts, scientists, journalists, linguists, policy professionals, entrepreneurs, change leaders, and many more. To work here, you don’t need to know “everything” about AI, but you need to believe in its potential to help shape our society for the better.
As an organization, we’re uniquely positioned at the sweet spot of governmental influence and startup agility: small enough to stay adaptive and have fun, yet backed by and in close contact with government, academia, and both the private and public sectors.
Join us to make a real-world impact by contributing to initiatives that benefit society and tackle critical challenges. Be at the forefront of AI innovation, working with cutting-edge technologies and playing a key role in shaping the future of AI in Sweden.
And, within our mission, we can most certainly be a platform empowering you to realize your ideas. AI Sweden’s ability to empower partners and individual team members to do exceedingly well in their profession is a key success factor for driving positive and significant impact.
In short, we like to believe we offer our team members a place to grow and an environment for personal development.
An equal and fair working environment
We strongly believe in diversity and inclusion and are acutely aware of the skewed gender balance in our industry. We actively strive to put together a diverse team in terms of age, gender and background.
AI Sweden does not accept unsolicited support and kindly asks not to be contacted by advertising agencies, recruitment agencies, or staffing companies.
- Organization: AI Labs
- Role: Engineering
- Location: Göteborg
