Title | Privacy in Machine Learning |
Duration | 90 min |
Module | B |
Lesson Type | Lecture |
Focus | Ethical - Trustworthy AI |
Topic | Privacy |
Adversary models, Training data extraction, Membership attack, Model extraction
None.
The materials of this learning event are available under CC BY-NC-SA 4.0.
This course provides a general introduction to the confidentiality issues of machine learning. Teachers are encouraged to use real-life examples to demonstrate the practical relevance of these vulnerabilities, especially the privacy-related ones, whose practical relevance is often debated and which are sometimes regarded merely as an obstacle to development. Students must understand that privacy risks can also slow down progress: parties facing confidentiality risks may be reluctant to share their data. The lesson focuses on the basic understanding needed to recognize privacy threats for the purpose of auditing machine learning models; related practical skills can be further developed in more hands-on learning events.
Duration (min) | Description | Concepts |
---|---|---|
20 | Machine Learning: Recap | Learning algorithm, Classification, Neural networks, Gradient descent, Confidence scores |
5 | Adversary models | White-box, Black-box attacks |
20 | Membership attack (see sketch below) | Target model, Attacker model, Differential Privacy |
20 | Model inversion (see sketch below) | Gradient descent with respect to the input data, reconstruction of the class average |
20 | Model extraction (see sketch below) | Re-training, parameter reconstruction, mitigations |
5 | Conclusions | |
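To make the three attack segments above more concrete, the sketches below give minimal, self-contained toy examples. They are not part of the official lesson materials: all data, models, and parameters are illustrative assumptions chosen only to demonstrate each mechanism. The first sketch shows a simple loss-threshold membership inference attack, assuming the attacker can obtain the model's confidence scores and knows each record's true label: an over-trained model fits its training points unusually well, so a record's loss hints at whether it was part of the training set.

```python
# Minimal sketch of a loss-threshold membership inference attack.
# Data, target model, and threshold rule are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic task: "members" were used to train the target model, "non-members" were not.
def make_data(n):
    x = torch.randn(n, 10)
    y = (x[:, 0] + 0.5 * x[:, 1] > 0).long()
    return x, y

x_mem, y_mem = make_data(100)   # training set (members)
x_non, y_non = make_data(100)   # held-out set (non-members)

# Target model, deliberately over-trained so it fits its training set very closely.
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(model.parameters(), lr=0.01)
per_example_loss = nn.CrossEntropyLoss(reduction="none")
for _ in range(500):
    opt.zero_grad()
    per_example_loss(model(x_mem), y_mem).mean().backward()
    opt.step()

# Attack: guess "member" whenever the per-example loss is below a threshold.
with torch.no_grad():
    loss_mem = per_example_loss(model(x_mem), y_mem)
    loss_non = per_example_loss(model(x_non), y_non)
threshold = torch.cat([loss_mem, loss_non]).median()
tpr = (loss_mem < threshold).float().mean().item()  # members correctly flagged
fpr = (loss_non < threshold).float().mean().item()  # non-members wrongly flagged
print(f"Flagged as member: {tpr:.0%} of members vs {fpr:.0%} of non-members")
```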
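The second sketch illustrates model inversion under white-box access: gradient descent with respect to the input (rather than the weights) yields a synthetic point that the model assigns to a chosen class with high confidence, leaking information about what that class's training data looks like. Recovering the class average faithfully usually requires additional priors or regularization; this toy example only demonstrates the mechanism.

```python
# Minimal sketch of model inversion on an illustrative 2-D toy model.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic training data: two Gaussian classes centred at (2, 2) and (-2, -2).
n = 200
x0 = torch.randn(n, 2) + torch.tensor([2.0, 2.0])
x1 = torch.randn(n, 2) + torch.tensor([-2.0, -2.0])
X = torch.cat([x0, x1])
y = torch.cat([torch.zeros(n, dtype=torch.long), torch.ones(n, dtype=torch.long)])

# Target model: a small neural-network classifier.
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()
for _ in range(200):
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

# Freeze the weights: the attacker only optimizes the input from here on.
for p in model.parameters():
    p.requires_grad_(False)

# Inversion: gradient descent on the input to maximize the confidence for
# class 0. The result lands in the class-0 region of input space.
x_inv = torch.zeros(1, 2, requires_grad=True)
inv_opt = torch.optim.Adam([x_inv], lr=0.05)
target = torch.tensor([0])
for _ in range(100):
    inv_opt.zero_grad()
    loss_fn(model(x_inv), target).backward()
    inv_opt.step()

print("Reconstructed input for class 0:", x_inv.detach().squeeze().tolist())
print("Class-0 training average:       ", x0.mean(dim=0).tolist())
```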
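The third sketch illustrates black-box model extraction by re-training: the attacker queries the deployed model, records the returned labels, and fits a surrogate that largely agrees with the target.

```python
# Minimal sketch of black-box model extraction by re-training.
# Target and surrogate architectures here are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for the victim's deployed model (the attacker cannot see its weights).
target = nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, 2))

# 1. The attacker queries the black-box API with inputs of their choosing.
queries = torch.randn(2000, 5)
with torch.no_grad():
    labels = target(queries).argmax(dim=1)  # only predicted labels are returned

# 2. The attacker re-trains a surrogate ("stolen") model on the query-label pairs.
surrogate = nn.Sequential(nn.Linear(5, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(surrogate.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
for _ in range(300):
    opt.zero_grad()
    loss_fn(surrogate(queries), labels).backward()
    opt.step()

# 3. Measure how often the surrogate agrees with the target on fresh inputs.
test = torch.randn(1000, 5)
with torch.no_grad():
    agreement = (surrogate(test).argmax(dim=1) == target(test).argmax(dim=1)).float().mean()
print(f"Surrogate agrees with the target on {agreement.item():.1%} of fresh inputs")
```

Typical mitigations, such as limiting the number of queries or the granularity of the returned confidence scores, reduce but do not eliminate this risk.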
The Human-Centered AI Masters programme was co-financed by the Connecting Europe Facility of the European Union under Grant №CEF-TC-2020-1 Digital Skills 2020-EU-IA-0068.
The HCAIM consortium consists of three excellence centres, three SMEs and four universities.