Jonathon Hare & Ethan Harris, 17th Feb 2020
Learning how to optimise the learning algorithm is very important, particularly when working with large amounts of data and models with many parameters to learn. We covered a few optimisation algorithms in the lecture that can be used to improve standard gradient descent and stochastic gradient descent. Momentum adds an exponentially weighted average of past gradients (computed using backpropagation) to the parameter update, which almost always improves the performance of learning. We also looked briefly at the Adam algorithm, which uses a combination of both RMSProp and Momentum to optimise learning.
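As a rough illustration of these ideas (a minimal sketch, not taken from the lab notebooks; the quadratic function and hyperparameters are made up purely for demonstration), the snippet below minimises f(x) = (x - 3)^2 once with SGD plus momentum and once with Adam, using pytorch's built-in optimisers:

    import torch

    def f(x):
        return (x - 3) ** 2  # simple quadratic with its minimum at x = 3

    for name in ["SGD+Momentum", "Adam"]:
        x = torch.tensor([0.0], requires_grad=True)
        if name == "SGD+Momentum":
            # momentum accumulates an exponentially weighted average of past gradients
            optimiser = torch.optim.SGD([x], lr=0.1, momentum=0.9)
        else:
            # Adam combines momentum-style averaging with RMSProp-style scaling
            optimiser = torch.optim.Adam([x], lr=0.1)

        for _ in range(100):
            optimiser.zero_grad()
            loss = f(x)
            loss.backward()   # backpropagation computes the gradient df/dx
            optimiser.step()  # the optimiser updates x using that gradient

        print(name, x.item())

Both optimisers should converge to a value close to 3, although their trajectories differ.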
Through this lab you’ll learn how to:
To work through this lab you'll use the Python 3 language in a Jupyter Notebook environment, with the pytorch tensor library. We will primarily be using Google Colab to run the notebooks, as this gives you access to an environment with all the tools required. If you wish to run the notebooks locally, see the information in the section below.
You might need to refer to the “optimisation” lecture slides for this lab - you can get those here: http://comp6248.ecs.soton.ac.uk/lectures/optimisation.pdf.
The following is a list of the notebooks for this lab, with links to open them directly in Google Colab (once opened, you should immediately save a copy to your Google Drive, otherwise anything you do will be lost when the browser closes), or to download them locally. You should work through the notebooks in numeric order as they follow on from each other.
3.1 Function Optimisation | preview | download
3.2 Support Vector Machines | preview | download
You’ll need access to a computer with the following installed:
Python (>= 3.6)
notebook (>= 5.4.1)
pytorch (>= 1.0.0)

If you want to work on your own machine we recommend using the Anaconda python distribution. Running conda install pytorch torchvision -c pytorch (see https://pytorch.org/get-started/locally/ for more options) will install both pytorch and torchvision (used in later labs).
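Once everything is installed, a quick sanity check such as the following (a minimal sketch; the printed versions will depend on what you installed) confirms that pytorch and torchvision import correctly:

    import torch
    import torchvision

    print(torch.__version__)        # should be >= 1.0.0
    print(torchvision.__version__)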