Module B (Deployment) focuses on the second phase of the MLOps development cycle: deployment. After the exploratory data and modelling phase (see Module A – Modelling) comes the integration of the ML solution into the business systems. It is now important to start thinking about the ML architecture and how it interacts with the existing (legacy) systems. To gain real benefit from automated ML solutions, pipelines need to be introduced: on the one hand to deal with continuous, live data supplies (stream processing), and on the other hand to link the results of the ML model to other systems.
Moreover, Module B increases the complexity of the AI technology by moving towards the use of neural networks and deep learning. A major advantage of these more complex models is that they are more flexible and versatile than the techniques introduced in Module A – Modelling. Their important disadvantages, however, are that they are more complex to understand and configure, and more opaque. Therein lies an important ethical dilemma in the use of advanced AI techniques: how do you still understand what the AI solution computes, and whether it does so in the right way? Making the deployment of AI solutions more transparent, and being able to determine and mitigate the possible risks, are important (societal) themes in this module.
**Learning outcome 1:** The student assesses the possible choices for the integration of an advanced AI technique, such as deep and/or reinforcement learning, and writes a one-page report based on a prototype developed taking into account the limitations of, and influences on, the customer's existing ICT systems and data facilities, obtained in collaboration with, for example, ICT architects or developers.

**Learning outcome 2:** The student assesses the potential risks involved and tests the degree of transparency (including interpretability, reproducibility and explainability) of a chosen AI/ML implementation, and designs solutions using techniques that increase insight and transparency among stakeholders (so-called Explainable AI (XAI) techniques) to remedy shortcomings relative to societal and customer-specific requirements.

**Learning outcome 3:** The student formulates a research design for a scientifically sound (practice-oriented) research project related to a company case by formulating a relevant, consistent and functional research question, considering the applied research methods to be used, and establishing a precise, relevant and critical theoretical framework.
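As a concrete illustration of the kind of XAI technique referred to above, the following is a minimal sketch (not part of the module materials) of permutation feature importance with scikit-learn: the model's test score is measured before and after shuffling each feature, and the drop in score indicates how much the model relies on that feature. The dataset and model here are arbitrary stand-ins.

```python
# Sketch of one XAI technique: permutation feature importance.
# Shuffling a feature breaks its relationship with the target; the
# resulting drop in test accuracy measures how much the model uses it.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Repeat the shuffling several times per feature to average out noise.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most influential features to stakeholders.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```

Model-agnostic techniques like this one are attractive for transparency assessments precisely because they treat the model as a black box and therefore apply equally to opaque deep-learning models.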
The Human-Centered AI Masters programme was co-financed by the Connecting Europe Facility of the European Union under Grant №CEF-TC-2020-1 Digital Skills 2020-EU-IA-0068. The materials of this learning event are available under CC BY-NC-SA 4.0.
The HCAIM consortium consists of three excellence centres, three SMEs and four universities.