Practical: Applying and evaluating privacy-preserving techniques

Administrative information


Title Defending against Membership and Attribute Inference Attacks in Machine Learning Models
Duration 90 min
Module B
Lesson Type Practical
Focus Ethical - Trustworthy AI
Topic Privacy Attacks on Machine Learning, Countermeasures

 

Keywords


Privacy of Machine Learning, Mitigation, Anonymization, Differential Privacy, Differentially Private Training, Random Forest

 

Learning Goals


  • Gain practical skills to mitigate privacy leakage by applying Differential Privacy
  • Learn how to anonymize datasets with Differential Privacy
  • Learn how to train ML models with Differential Privacy
  • Understand the difference between data anonymization and privacy-preserving model training
  • Study the trade-off between privacy preservation (anonymization) and utility (model quality, data accuracy)

Lesson Materials



The materials of this learning event are available under CC BY-NC-SA 4.0.

 

 

Instructions for Teachers


This laboratory exercise is a follow-up to Practical: Auditing frameworks of privacy and data protection, in which privacy attacks against ML models are developed; the current learning event is about mitigating those attacks.

Machine learning models are often trained on confidential (or personal, sensitive) data. For example, a model can predict the salary of an individual from their other attributes (such as education, place of residence, race, sex, etc.). A common misconception is that such models are not personal data even when their training data is (indeed, the training data can be a collection of records about individuals), because the models are computed from aggregated information derived from the sensitive training data (e.g., averages of gradients in neural networks, or entropy/counts of labels in random forests).

The goal of this lab session is to show that machine learning models can themselves be regarded as personal data, and that their processing is therefore very likely to be regulated in many countries (e.g., by the GDPR in Europe). Students will design privacy attacks to test whether trained models leak information about their training data, and will also mitigate these attacks. For example, membership inference attacks aim to detect the presence of a given sample in the training data of a target model from the model and/or its output. White-box attacks can access both the trained model (including its parameters) and the model's output (i.e., its predictions), whereas black-box attacks can only access the model's predictions for a given sample. Attribute inference attacks aim to predict a missing sensitive attribute of a sample from the model's output together with all the sample's other attributes.
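The black-box membership inference setting described above can be illustrated with a minimal, self-contained sketch. No real model is used here: the confidence gap between training members and non-members (a typical symptom of overfitting) is simulated directly, since that gap is exactly the signal a real attack exploits. All names and numbers below are illustrative, not part of the lab materials.

```python
import random

random.seed(0)

# Toy stand-in for a trained target model: overfitted models tend to
# return higher confidence on samples they were trained on. In the lab,
# these confidences would come from the actual target model.
def model_confidence(is_member):
    base = 0.9 if is_member else 0.7  # members get higher confidence on average
    return min(1.0, max(0.0, random.gauss(base, 0.1)))

members = [model_confidence(True) for _ in range(1000)]
non_members = [model_confidence(False) for _ in range(1000)]

# Black-box threshold attack: predict "member" whenever confidence > t.
threshold = 0.8
tp = sum(c > threshold for c in members)        # members correctly flagged
tn = sum(c <= threshold for c in non_members)   # non-members correctly rejected
attack_accuracy = (tp + tn) / 2000
print(f"attack accuracy: {attack_accuracy:.2f}")  # clearly above the 0.5 random guess
```

An attack accuracy well above 0.5 indicates leakage; the defenses below aim to push it back toward random guessing.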

Teachers are advised to emphasize the general trade-off between privacy preservation and model quality/data accuracy. If necessary, extra exercises can be built into the syllabus to demonstrate this (e.g., evaluating model quality as a function of epsilon and delta).
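The epsilon/utility trade-off can be demonstrated concretely with the Laplace mechanism, the basic building block of Differential Privacy: a counting query of sensitivity 1 is released with noise of scale 1/epsilon, so smaller epsilon (stronger privacy) means noisier answers. This is a sketch under illustrative assumptions; `dp_count` and the example count are hypothetical, not from the lab materials.

```python
import math
import random

random.seed(42)

def laplace_noise(scale):
    """Sample from Laplace(0, scale) via the inverse-CDF method."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def dp_count(true_count, epsilon, sensitivity=1.0):
    """Release a counting query with epsilon-DP using the Laplace mechanism."""
    return true_count + laplace_noise(sensitivity / epsilon)

true_count = 500  # e.g. how many records have salary > 50K (illustrative)
mae = {}
for eps in (0.01, 0.1, 1.0, 10.0):
    errors = [abs(dp_count(true_count, eps) - true_count) for _ in range(1000)]
    mae[eps] = sum(errors) / len(errors)
    print(f"epsilon={eps:>5}: mean absolute error ~ {mae[eps]:.2f}")
# Smaller epsilon -> stronger privacy -> noisier counts (lower utility).
```

The expected absolute error of the Laplace mechanism is exactly sensitivity/epsilon, which the empirical means above reproduce.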

Outline

In this lab session, students will mitigate privacy risks in AI models. Specifically, they will develop two mitigation techniques:

  1. Defense 1: generate synthetic data with Differential Privacy guarantees and check
    • how much the model quality degrades when the privacy-preserving synthetic data is used to train the model instead of the original data (depending on the privacy parameter epsilon)
    • whether training on the synthetic data instead of the original data prevents membership and attribute inference attacks
  2. Defense 2: train the model with Differential Privacy guarantees and check
    • how much the model quality degrades when the privacy-preserving model is used instead of the original model for prediction (depending on the privacy parameter epsilon)
    • whether the privacy-preserving model prevents membership inference attacks
    • how the accuracy of the privacy-preserving model compares to Defense 1
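The idea behind Defense 1 can be prototyped even without a synthetic-data library. Randomized response, a classic local-DP mechanism, perturbs each binary attribute so that any single record is plausibly deniable, while aggregate statistics remain recoverable after debiasing. This is a minimal sketch under stated assumptions: the 30%-positive toy attribute and the helper names are illustrative, not part of the lab materials.

```python
import math
import random

random.seed(7)

def randomized_response(bit, epsilon):
    """Keep the true bit with probability e^eps / (1 + e^eps), flip it otherwise.
    This satisfies epsilon-local-DP for a single binary attribute."""
    p_keep = math.exp(epsilon) / (1 + math.exp(epsilon))
    return bit if random.random() < p_keep else 1 - bit

def debiased_mean(noisy_bits, epsilon):
    """Unbiased estimate of the true mean from randomized-response outputs.
    If p is the keep probability, E[observed] = (2p - 1) * true + (1 - p)."""
    p = math.exp(epsilon) / (1 + math.exp(epsilon))
    observed = sum(noisy_bits) / len(noisy_bits)
    return (observed - (1 - p)) / (2 * p - 1)

true_bits = [1] * 300 + [0] * 700  # sensitive binary attribute, true mean 0.30
for eps in (0.5, 2.0, 8.0):
    synthetic = [randomized_response(b, eps) for b in true_bits]
    print(f"epsilon={eps}: estimated mean = {debiased_mean(synthetic, eps):.3f}")
# Larger epsilon -> fewer flips -> more accurate estimate, but weaker privacy.
```

The same pattern carries over to the lab: privacy-preserving synthetic records protect individuals, while models trained on them retain (degraded) utility that shrinks as epsilon decreases.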

Students will form groups of two and work as a team. Each group hands in a single documentation/solution.

More information

Click here for an overview of all lesson plans of the Human-Centred AI Master's programme

Please visit the home page of the consortium HCAIM

Acknowledgements

The Human-Centered AI Masters programme was co-financed by the Connecting Europe Facility of the European Union under Grant №CEF-TC-2020-1 Digital Skills 2020-EU-IA-0068.


 

The HCAIM consortium consists of three centres of excellence, three SMEs and four universities.

HCAIM Consortium

  • The arrangement Practical: Applying and evaluating privacy-preserving techniques was created with Wikiwijs by Kennisnet. Wikiwijs is the educational platform where you can find, create and share learning materials.

    Last modified
    2024-05-15 11:17:32
    Licence

    This learning material is published under the Creative Commons Attribution-ShareAlike 4.0 International licence. This means that, provided you attribute the author and publish under the same licence, you are free to:

    • share the work: copy, distribute and transmit it via any medium or file format
    • adapt the work: remix, transform and build upon it
    • for any purpose, including commercial purposes.

    More information about the CC Attribution-ShareAlike 4.0 International licence.

    Additional information about this learning material

    The following additional information is available for this learning material:

    Explanation
    .
    End user
    pupil/student
    Difficulty level
    average
    Study load
    4 hours and 0 minutes

    Wikiwijs arrangements used

    HCAIM Consortium. (n.d.).

    Acknowledgement

    https://maken.wikiwijs.nl/198386/Acknowledgement

    HCAIM Consortium. (n.d.).

    Lecture: Risk & Risk mitigation

    https://maken.wikiwijs.nl/200139/Lecture__Risk___Risk_mitigation
