| Title | Trust, Normativity and Model Drift |
| --- | --- |
| Duration (min) | 45-60 |
| Module | C |
| Lesson Type | Lecture |
| Focus | Technical - Future AI |
| Topic | Open Problems and Challenges |
| Keywords | XAI, Ante-hoc, Post-hoc, SHAP, LIME |
The assignment consists of four exercises:

1. Using the tabular example in the notes, apply both LIME and SHAP to examine all attributes for four other incorrectly classified instances and describe the predictions (a minimal code sketch for this exercise follows the list).
2. Using CNNs with the LIME and SHAP explainability approaches, describe the predictions for four other incorrectly classified instances.
3. For a text-based problem, identify four other incorrectly classified instances, describe the predictions, and explain why they may have been incorrect.
4. Sum up your efforts, determine whether the exercises meet all five XAI perspectives, and elaborate if they do.
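For the first (tabular) exercise, the sketch below shows one possible way to locate misclassified instances and explain them with both LIME and SHAP. It is a minimal illustration and not part of the original lesson materials: it assumes a scikit-learn random forest on the breast-cancer dataset stands in for "the tabular example in the notes", and that the `lime`, `shap`, and `scikit-learn` packages are installed.

```python
# Minimal sketch, assuming the scikit-learn breast-cancer dataset stands in for
# "the tabular example in the notes" (pip install lime shap scikit-learn).
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Train a simple classifier on a tabular dataset.
data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Find incorrectly classified test instances and take up to four of them
# (depending on the model, there may be fewer than four).
preds = model.predict(X_test)
wrong_idx = np.where(preds != y_test)[0][:4]

# LIME: local surrogate explanations, reporting all attributes per instance.
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)
for i in wrong_idx:
    exp = lime_explainer.explain_instance(
        X_test[i], model.predict_proba, num_features=len(data.feature_names)
    )
    print(f"LIME explanation for misclassified instance {i}:")
    print(exp.as_list())

# SHAP: Shapley-value attributions for the same misclassified instances.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test[wrong_idx])
print("SHAP values for the misclassified instances:")
print(shap_values)
```

For each instance, compare the attributes that LIME and SHAP rank as most influential and use them to describe why the model's prediction may have been incorrect.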
Click here for an overview of all lesson plans of the Master Human-Centred AI.
Please visit the home page of the HCAIM consortium.
The Human-Centered AI Masters programme was co-financed by the Connecting Europe Facility of the European Union under Grant №CEF-TC-2020-1 Digital Skills 2020-EU-IA-0068. The materials of this learning event are available under CC BY-NC-SA 4.0.
The HCAIM consortium consists of three excellence centres, three SMEs and four universities.