| Title | Explainable AI for end-users |
| Duration | 60 mins |
| Module | B-opt |
| Lesson Type | Lecture |
| Focus | Ethical - Trustworthy AI |
| Topic | General Explainable AI |
Keywords: Explainable AI.
References and background for students: None.
The materials of this learning event are available under CC BY-NC-SA 4.0.
Practical laboratory that provides learners with a well-understood data set and requires them to apply a machine learning algorithm for classification. Learners will then have to select a post-hoc explainability technique (ICE, DeepLIFT, LIME, SHAP); a minimal sketch of one such technique is given below.
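As an illustration (not part of the original lesson material), the sketch below applies LIME, one of the listed post-hoc techniques, to a classifier trained on a well-understood tabular dataset. The dataset and model choices are assumptions for the example; any classifier with a `predict_proba` method would work.

```python
# A minimal sketch (illustrative assumption, not the lesson's prescribed lab):
# LIME fits a simple local surrogate model around one prediction to explain it.
import lime.lime_tabular
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# A well-understood tabular dataset and a black-box classifier.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Post-hoc explainer: perturbs one instance and observes the model's response.
explainer = lime.lime_tabular.LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
explanation = explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # top features pushing this single prediction
```

The same workflow applies to the other listed techniques: SHAP and DeepLIFT attribute a prediction to input features, while ICE plots show how a prediction changes as one feature varies.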
Students will have to apply a relevant Explainable AI technique (or methodology) to their specific project/model, and report on why their approach is suitable and which (ethical) problems it addresses.
Trustworthy AI is a wider concept than just applying (post-hoc) XAI techniques. For Trustworthy AI, the primary model itself should first be properly understood, before resorting to XAI tools. A non-post-hoc approach to "explainability" is, for example, to visualize (explain) the effect of model decision thresholds on, e.g., fairness, as sketched below.
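A minimal sketch of such a threshold visualization follows; the scores and the binary sensitive attribute are synthetic assumptions for illustration. It plots each group's positive-prediction rate as the decision threshold varies, and their difference (the demographic-parity gap), making the fairness impact of a chosen threshold directly visible.

```python
# A minimal sketch with synthetic data (scores and sensitive attribute are
# assumed for illustration): how the decision threshold affects each group's
# positive-prediction rate, i.e. the demographic-parity gap.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)        # binary sensitive attribute
scores = rng.beta(2 + group, 2, size=1000)   # group 1 scores slightly higher

thresholds = np.linspace(0, 1, 101)
rate_a = np.array([(scores[group == 0] >= t).mean() for t in thresholds])
rate_b = np.array([(scores[group == 1] >= t).mean() for t in thresholds])

plt.plot(thresholds, rate_a, label="group A positive rate")
plt.plot(thresholds, rate_b, label="group B positive rate")
plt.plot(thresholds, np.abs(rate_a - rate_b), "--",
         label="demographic-parity gap")
plt.xlabel("decision threshold")
plt.ylabel("rate")
plt.legend()
plt.show()
```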
[[Review status::I think this LE can be deleted. There is a lot of overlap with LE156/LE157 (the practicals) and with the other two lectures on this topic (LE154, LE158). [OurenKuiper]
This LE is a "lecture", but the description talks about a "practical laboratory"...
!!! Many important fields are empty (e.g. goals, keywords, lesson material); the outline is missing !!!]]
Click here for an overview of all lesson plans of the Master Human-Centred AI.
Please visit the home page of the HCAIM consortium.
The Human-Centered AI Masters programme was co-financed by the Connecting Europe Facility of the European Union under Grant №CEF-TC-2020-1 Digital Skills 2020-EU-IA-0068. The materials of this learning event are available under CC BY-NC-SA 4.0.
The HCAIM consortium consists of three excellence centres, three SMEs and four universities.