Administrative Information
Title: Cutting Edge XAI
Duration: 60 minutes
Module: B-opt
Lesson Type: Lecture
Focus: Ethical - Trustworthy AI
Topic: General Explainable AI
Keywords
None.
Learning Goals
- Student knows about cutting-edge (academic) XAI tools, methods, and mindsets.
- Student can draw inspiration from cutting-edge XAI for their own project.
- Student understands the main challenges of the (future) use of XAI.
Expected Preparation
Learning Events to be Completed Before
Obligatory for Students
None.
Optional for Students
None.
References and Background for Students
- Rudin, C. (2019). Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5), 206-215.
- Miller, T. (2019). Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 267, 1-38.
- Adadi, A., & Berrada, M. (2018). Peeking inside the black-box: A survey on explainable artificial intelligence (XAI). IEEE Access, 6, 52138-52160.
- Extra: Nauta, M., van Bree, R., & Seifert, C. (2021). Neural prototype trees for interpretable fine-grained image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 14933-14943).
- Extra: van der Waa, J., Nieuwburg, E., Cremers, A., & Neerincx, M. (2021). Evaluating XAI: A comparison of rule-based and example-based explanations. Artificial Intelligence, 291, 103404.
Recommended for Teachers
- Rudin, C., Chen, C., Chen, Z., Huang, H., Semenova, L., & Zhong, C. (2022). Interpretable machine learning: Fundamental principles and 10 grand challenges. Statistics Surveys, 16, 1-85.
Lesson Materials
The materials of this learning event are available under CC BY-NC-SA 4.0.
Instructions for Teachers
Picture on prototypes, which couple deep learning with understandable explanations. From: Nauta, M., van Bree, R., & Seifert, C. (2021). Neural prototype trees for interpretable fine-grained image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 14933-14943).
Outline
- Overview of current XAI approaches and their shortcomings (15 mins)
  - Limitations of SHAP (and, by extension, LIME); see the first sketch after this outline.
  - Limitations in terms of understandability of XAI for end users.
- The 10 grand challenges posed by Rudin et al. (30 mins)
  - Sparse logical models (illustrated by the second sketch after this outline)
  - Scoring systems
  - Generalized additive models
  - Modern case-based reasoning
  - Supervised and unsupervised disentanglement of neural networks
  - Dimension reduction for data visualization
  - Machine learning models that incorporate physics/causality
  - Choosing from the “Rashomon set” of good models
  - Interpretable reinforcement learning
- Reflection on when not to use post-hoc XAI techniques but to opt for simpler, inherently interpretable models (15 mins)
  - E.g., Explainable Boosting Machines (part of the InterpretML package); see the third sketch after this outline.
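First sketch. A minimal demonstration teachers could use for the opening point, assuming the `lime` and `scikit-learn` packages; the dataset and model are illustrative choices, not prescribed by the lesson. Perturbation-based explainers such as LIME sample around the instance being explained, so two runs that differ only in their random seed can rank features differently for the very same prediction.

```python
# Hedged sketch: LIME explanations depend on random perturbations,
# so explanations of the same prediction can differ between runs.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

def explain(seed):
    explainer = LimeTabularExplainer(
        data.data,
        feature_names=list(data.feature_names),
        class_names=list(data.target_names),
        random_state=seed,  # only the sampling seed differs between runs
    )
    exp = explainer.explain_instance(
        data.data[0], model.predict_proba, num_features=5
    )
    return exp.as_list()  # top (feature condition, weight) pairs

# Comparing the two top-5 lists makes the (in)stability concrete.
print(explain(seed=1))
print(explain(seed=2))
```

The same exercise with a sampling-based SHAP estimator (e.g., `shap.KernelExplainer`) makes the analogous point for approximate Shapley values.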
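Second sketch. To ground the discussion of Rudin et al.'s challenges, a small concrete instance of the first challenge area (sparse logical models) helps; this minimal sketch assumes only scikit-learn, with a depth-limited decision tree standing in for the optimal sparse trees surveyed by Rudin et al. (2022).

```python
# Hedged sketch: a sparse logical model small enough to read in full.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()

# Hard sparsity constraints (depth, leaf size) keep the rule set short.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=20, random_state=0)
tree.fit(data.data, data.target)

# The printed rules are the model itself; no post-hoc explainer is needed.
print(export_text(tree, feature_names=list(data.feature_names)))
```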
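Third sketch. For the closing reflection, a minimal glass-box example, assuming the `interpret` package (InterpretML) with an illustrative dataset: an Explainable Boosting Machine is a modern generalized additive model, so its per-feature shape functions are the exact model rather than a post-hoc approximation of a black box.

```python
# Hedged sketch: an inherently interpretable model from InterpretML.
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier(random_state=0).fit(X_train, y_train)

# Global view: one learned curve per feature, readable on its own.
global_expl = ebm.explain_global()
# Local view: exact additive contributions behind individual predictions.
local_expl = ebm.explain_local(X_test[:5], y_test[:5])
```

In a notebook, `from interpret import show; show(global_expl)` renders the interactive plots, which contrast nicely with the approximate attributions from the first sketch.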
More Information
See the overview of all lesson plans of the Master Human-Centred AI.
Please visit the home page of the HCAIM consortium.
Acknowledgements
The Human-Centered AI Masters programme was co-financed by the Connecting Europe Facility of the European Union under Grant №CEF-TC-2020-1 Digital Skills 2020-EU-IA-0068.
The materials of this learning event are available under CC BY-NC-SA 4.0.
The HCAIM consortium consists of three centres of excellence, three SMEs, and four universities.