Administrative information
Title | Hyperparameter tuning
Duration | 60 min
Module | B
Lesson Type | Lecture
Focus | Technical - Deep Learning
Topic | Hyperparameter tuning
Keywords
Hyperparameter tuning, activation functions, loss, epochs, batch size, learning rate
Learning Goals
- Investigate the effects of model capacity and depth
- Experiment with varying epochs and batch sizes
- Trial different activation functions and learning rates
Expected Preparation
Learning Events to be Completed Before
Obligatory for Students
None.
Optional for Students
None.
References and background for students:
- John D. Kelleher and Brian Mac Namee. (2018), Fundamentals of Machine Learning for Predictive Data Analytics, MIT Press.
- Michael Nielsen. (2015), Neural Networks and Deep Learning, 1st ed., Determination Press, San Francisco, CA, USA.
- Charu C. Aggarwal. (2018), Neural Networks and Deep Learning, 1st ed., Springer.
- Antonio Gulli and Sujit Pal. Deep Learning with Keras, Packt [ISBN: 9781787128422].
Recommended for Teachers
None.
Lesson Materials
The materials of this learning event are available under CC BY-NC-SA 4.0.
Instructions for Teachers
This lecture introduces students to the fundamentals of hyperparameter tuning. We use the Adult Census dataset, a binary classification problem, as the running example for the use and outcomes of tuning various hyperparameters; more detail on this dataset is given in the corresponding tutorial. The goal of this lecture is to introduce several hyperparameters, with examples of how modifying each one may aid or hinder learning. In addition, we provide examples of underfitting and overfitting, noise, and performance gains (training time and, in some cases, accuracy/loss) as each hyperparameter is tuned. We use diagnostic plots to evaluate the effect of hyperparameter tuning, with a particular focus on loss. Note that the module used to plot the loss is matplotlib.pyplot, whose axes are auto-scaled; this means that significant differences may appear insignificant, or vice versa, when comparing the loss on the training and test data. Some liberties are also taken for scaffolding, such as tuning the number of epochs first (almost as a regularisation technique) while keeping the batch size constant. Ideally these would be tuned together, but for this lecture they are treated separately.
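As a minimal sketch of the kind of experiment shown in the lecture, the code below trains a small Keras classifier twice with different capacity settings and plots the training/validation loss curves with matplotlib.pyplot. It does not reproduce the Adult Census preprocessing: X_train, y_train, X_test, and y_test are assumed to be already-encoded NumPy arrays, and the layer sizes, epochs, batch size, and learning rate are illustrative defaults rather than the values used in the lecture materials.

```python
# Minimal sketch: exposing a few hyperparameters of a small Keras model.
# Assumes X_train, y_train, X_test, y_test are preprocessed NumPy arrays
# (encoded Adult Census features, binary labels) - not loaded here.
import matplotlib.pyplot as plt
from tensorflow import keras

def build_model(n_features, hidden_units=16, depth=2,
                activation="relu", learning_rate=0.01):
    """Small fully connected binary classifier.

    hidden_units and depth control capacity; activation and
    learning_rate are the other hyperparameters explored in the lecture.
    """
    model = keras.Sequential([keras.Input(shape=(n_features,))])
    for _ in range(depth):
        model.add(keras.layers.Dense(hidden_units, activation=activation))
    model.add(keras.layers.Dense(1, activation="sigmoid"))
    model.compile(optimizer=keras.optimizers.SGD(learning_rate=learning_rate),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

def run_experiment(epochs=20, batch_size=32, **model_kwargs):
    """Train once with the given hyperparameters and return the history."""
    model = build_model(X_train.shape[1], **model_kwargs)
    return model.fit(X_train, y_train, epochs=epochs, batch_size=batch_size,
                     validation_data=(X_test, y_test), verbose=0)

# Two illustrative settings: a low-capacity and a higher-capacity network.
histories = {
    "small (8 units, depth 1)": run_experiment(hidden_units=8, depth=1),
    "large (64 units, depth 3)": run_experiment(hidden_units=64, depth=3),
}

# Diagnostic plot: pyplot auto-scales the axes, so gaps between curves
# should be read with the scale in mind.
for label, history in histories.items():
    plt.plot(history.history["loss"], label=f"train loss, {label}")
    plt.plot(history.history["val_loss"], "--", label=f"val loss, {label}")
plt.xlabel("epoch")
plt.ylabel("binary cross-entropy loss")
plt.legend()
plt.show()
```

The same run_experiment call can be reused for the later segments of the lecture by varying epochs, batch_size, activation, or learning_rate while holding the other arguments fixed.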
Outline
Time schedule
Duration (Min) | Description
5 | Overview of the data
10 | Capacity and depth tuning (underfitting and overfitting)
10 | Epochs (undertraining and overtraining)
10 | Batch sizes (for noise suppression)
10 | Activation functions (and their effects on performance: training time and accuracy)
10 | Learning rates (vanilla, LR decay, momentum, adaptive; see the sketch below)
5 | Recap on the forward pass process
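To make the learning-rate segment concrete, the sketch below shows how the four variants named in the outline (vanilla SGD, learning-rate decay, momentum, and an adaptive method) can be configured as Keras optimizers. The numeric rates and the decay schedule are illustrative assumptions, not values prescribed by the lecture; each optimizer would simply replace the default in model.compile.

```python
# Illustrative Keras optimizer settings for the learning-rate variants in the
# outline; the numeric values are assumptions for demonstration only.
from tensorflow import keras

# Vanilla SGD: a fixed learning rate.
vanilla = keras.optimizers.SGD(learning_rate=0.01)

# Learning-rate decay: the rate shrinks as training progresses.
decayed_lr = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.01, decay_steps=1000, decay_rate=0.9)
lr_decay = keras.optimizers.SGD(learning_rate=decayed_lr)

# Momentum: past gradients are accumulated to smooth noisy updates.
momentum = keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)

# Adaptive: per-parameter learning rates (Adam as a common example).
adaptive = keras.optimizers.Adam(learning_rate=0.001)

# Any of these can be passed to model.compile, e.g.:
# model.compile(optimizer=momentum, loss="binary_crossentropy",
#               metrics=["accuracy"])
```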
More information
Click here for an overview of all lesson plans of the Human-Centered AI Masters programme.
Please visit the home page of the HCAIM consortium.
Acknowledgements
The Human-Centered AI Masters programme was co-financed by the Connecting Europe Facility of the European Union under Grant №CEF-TC-2020-1 Digital Skills 2020-EU-IA-0068.
The materials of this learning event are available under CC BY-NC-SA 4.0
The HCAIM consortium consists of three centres of excellence, three SMEs, and four universities.