COMP6248 Differentiable Programming (and Deep Learning)

2021-22


Maintained by Dr Kate Farrahi & Dr Jonathon Hare.

Welcome

Welcome to the homepage for the ECS COMP6248 Deep Learning module.

Differentiable programming and deep learning have revolutionised numerous fields in recent years. We’ve witnessed improvements in everything from computer vision through speech analysis to natural language processing as a result of the advent of cheap GPGPU compute coupled with large datasets and some neat algorithms. More broadly, the idea of ‘differentiable programming’, in which we define entire programs as compositions of differentiable operations that can then be optimised to fit data, looks set to become a new norm in how we utilise computers.
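To make that idea concrete, here is a minimal sketch (our illustration, not part of the course materials) of a differentiable program in PyTorch: a toy affine model whose parameters are fitted to made-up data by gradient descent. Every name and number in it is an assumption chosen purely for the example.

```python
# A minimal differentiable program: compose differentiable operations,
# then optimise the parameters to fit data. (Illustrative sketch only.)
import torch

# Made-up data: noisy samples from y = 3x + 2.
x = torch.linspace(-1, 1, 100).unsqueeze(1)
y = 3 * x + 2 + 0.1 * torch.randn_like(x)

# The 'program': an affine map with learnable parameters w and b.
w = torch.randn(1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

optimiser = torch.optim.SGD([w, b], lr=0.1)
for _ in range(200):
    optimiser.zero_grad()
    loss = ((x * w + b - y) ** 2).mean()  # mean squared error
    loss.backward()   # automatic differentiation: d(loss)/dw, d(loss)/db
    optimiser.step()  # gradient-descent update

print(w.item(), b.item())  # should be close to 3 and 2
```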

This module will look at how differentiable programming works, from theoretical foundations right through to practical implementation. We’ll study key aspects such as automatic differentiation, look at models for deep learning such as convolutional and recurrent neural networks, and consider current research in depth. Along the way we’ll also look at aspects of biology and neuroscience, and see how ideas from these fields feed in to current research.
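As a small taster of automatic differentiation (again an illustrative sketch of ours, in PyTorch, which the labs use), note that exact gradients can be computed through ordinary program constructs such as loops, not just fixed mathematical formulae; the little function below is made up for the example:

```python
# Reverse-mode automatic differentiation through ordinary control flow.
import torch

def program(x):
    # An arbitrary little program: loops (and branches) are fine,
    # provided each operation applied to x is differentiable.
    for _ in range(3):
        x = torch.sin(x) + x ** 2
    return x

x = torch.tensor(0.5, requires_grad=True)
y = program(x)
y.backward()   # runs reverse-mode autodiff through the whole program
print(x.grad)  # the exact derivative dy/dx at x = 0.5, no finite differences
```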

The overall aim of this module is not to teach you to train pre-existing models (although you will learn to do that!), but rather to equip you with the fundamental skills to understand and implement models and ideas that are currently being developed by researchers. We intend to equip you with the knowledge needed to understand new ideas as they are published, and to give you the ability to constructively criticise different approaches and identify their limitations.

As a word of warning, this is a mathematical module: the predominant focus is on models that can be optimised via gradient methods. You need to have a good grasp of linear (matrix) algebra and matrix calculus, as well as the fundamentals of machine learning, probability and statistics. You will also need to be comfortable with Python programming and the use of numeric/matrix libraries such as numpy or pytorch.
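As a rough self-check of that background (our illustration, not an official diagnostic), the following lines should feel routine. They use numpy and pytorch to verify the matrix-calculus identity d(xᵀAx)/dx = (A + Aᵀ)x on random data:

```python
# Quick prerequisite self-check: basic matrix algebra and matrix calculus.
import numpy as np
import torch

A = np.random.randn(3, 3)
x = np.random.randn(3)
print(A @ x)             # matrix-vector product
print(np.linalg.inv(A))  # matrix inverse

# Verify d(x^T A x)/dx = (A + A^T) x using PyTorch's autograd.
At = torch.tensor(A)
xt = torch.tensor(x, requires_grad=True)
(xt @ At @ xt).backward()
print(torch.allclose(xt.grad, (At + At.T) @ xt.detach()))  # True
```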

Lectures and assigned reading

This year the lectures for this course will be given by Dr Kate Farrahi (email) and Dr Jonathon Hare (email). We have a capable team of PhD students to facilitate the lab sessions.

There will be two lectures each week, one on Mondays at 3pm, and the other on Thursdays at 2pm. The lectures will all take place in person, though the labs will take place virtually over Teams.

If you take part in this module, we expect you to turn up to the lectures and get involved - asking questions and provoking discussion is positively encouraged.

The current working timetable/plan is below, and illustrates the topics we intend to cover, but this will evolve as the course progresses. Many of the lectures are coupled with assigned reading materials that you should read before the lecture takes place. This will broaden your understanding of the topic whilst giving you the skills required to read and understand the key points from recent research literature. The lectures are broadly broken into three groups: fundamentals (weeks 1-4), architectures/models (weeks 5-8), and advanced topics (weeks 9-12).

| Week | Date | Location | Topic | Handouts | Reading Material | Lecture Video |
|------|------|----------|-------|----------|------------------|---------------|
| 1 | 31-Jan | 54-4011 | Intro and background | intro-handouts.pdf | | Panopto link |
| | 03-Feb | 7-3009 | Review of fundamentals | mlreview-handouts.pdf | Ch. 3 of Michael Nielsen’s book | Panopto link |
| 2 | 07-Feb | 54-4011 | The Power of Differentiation | differentiate-handouts.pdf | | Panopto link |
| | 10-Feb | 7-3009 | Optimisation | optimisation-handouts.pdf | Adam: A Method for Stochastic Optimization | Panopto link |
| 3 | 14-Feb | 54-4011 | Backpropagation | backprop-handouts.pdf | Learning representations by back-propagating errors | |
| | 17-Feb | 7-3009 | Automatic Differentiation | autograd-handouts.pdf | Automatic differentiation in PyTorch | Panopto link |
| 4 | 21-Feb | 54-4011 | Deeper Networks: universal approximation, overfitting and regularisation | deepnetworks-handouts.pdf | Dropout: A Simple Way to Prevent Neural Networks from Overfitting | Panopto link |
| | 24-Feb | 7-3009 | Embeddings | Embeddings-handout.pdf | Efficient Estimation of Word Representations in Vector Space | Panopto link |
| 5 | 28-Feb | 54-4011 | A Biological Perspective | biological-inspiration-handouts.pdf | How Convolutional Neural Network Architecture Biases Learned Opponency and Color Tuning | Panopto link |
| | 03-Mar | 7-3009 | Convolutional Networks | Convolution-handouts.pdf | Handwritten Digit Recognition with a Back-Propagation Network | Panopto link |
| 6 | 07-Mar | 54-4011 | Network Architectures for Image Classification | Architectures-handouts.pdf | ImageNet Classification with Deep Convolutional Neural Networks; Striving for Simplicity: The All Convolutional Net; Very Deep Convolutional Networks for Large-Scale Image Recognition; Going Deeper with Convolutions; Deep Residual Learning for Image Recognition | Panopto link |
| | 10-Mar | 7-3009 | Network Architectures for Image Classification (II) | as above | | Panopto link |
| 7 | 14-Mar | 54-4011 | Recurrent Neural Networks | rnn-handout.pdf | The Unreasonable Effectiveness of Recurrent Neural Networks | Lecture recording (from last year) |
| | 17-Mar | 7-3009 | LSTMs and GRUs | lstm-handout.pdf | Recurrent Neural Network Regularization | |
| 8 | 21-Mar | 54-4011 | Differentiable Relaxations (sampling, etc.) | relaxation-handout.pdf | | Panopto link |
| | 24-Mar | 7-3009 | Attention | attention-handout.pdf | | Panopto link |
| 9 | 25-Apr | 54-4011 | Auto-encoders, unsupervised learning and self-supervision | vaes-handout.pdf | Blog post on autoencoders | Panopto link |
| | 28-Apr | 7-3009 | Generative Models Part 1: Differentiable Generator Networks | gans-handout.pdf | | Panopto link |
| 10 | 02-May | - | No lecture (bank holiday) | | | |
| | 05-May | 7-3009 | Generative Models Part 2: Variational Autoencoders | vaes-handout.pdf | Auto-Encoding Variational Bayes | |
| 11 | 09-May | 54-4011 | Generative Models Part 3: Generative Adversarial Networks | gans-handout.pdf | GANs, DCGANs | |
| | 12-May | 7-3009 | Object Detection | | | |
| 12 | 16-May | 54-4011 | Current Research Topics | | | |
| | 19-May | 7-3009 | No lecture | | | |

Assorted topic lectures

These are bonus lectures/talks on topics that students requested in previous years; feel free to watch them. If there are additional topics that you would like covered, then let us know.

| Topic | Description | Handouts/slides | Video |
|-------|-------------|-----------------|-------|
| Distributed Learning | How can you distribute large models and data over many machines? This is a huge topic, but I made two lectures on it for Advanced Machine Learning (which I’ve also made available here in case you’re not taking it) covering the basics of both the hardware bottlenecks and the software mitigations to these bottlenecks. | Interactive slides and handouts | Part 1, Part 2 |
| Attention is (possibly) all you need | Recent trends, particularly in models for mining textual data, have used “attentional” mechanisms to get breakthrough performance and move away from recurrent networks; what is this attention, and how does it work? | | link |
| Neural architecture search | A few people have asked how you design a network architecture; that’s quite a difficult question, as it relies on a lot of intuition (possibly with some inspiration from biology) and trial & error. There is an alternative though… why not let the network design itself? There are a number of approaches to what is called Neural Architecture Search, but most use horribly inefficient Reinforcement Learning, so we’ll just take a little look at a nifty differentiable approach called “DARTS”. | | link |
| Hardware Considerations | Deep networks typically require power-hungry hardware and lots of memory. Can you reduce the requirements and optimise for lower-powered hardware? | | link |

Labs

For 8 of the weeks (starting in week 2) we are organising a 2-hour lab session in which you will need to complete a series of worksheets. The worksheets have been designed to put the theory covered in the lectures into context, and to equip you with practical skills in implementing and training differentiable programs. A team of PhD-student demonstrators will be available in the lab to help you with any questions you might have about the topics you are working on.

40% of the marks for the module are for lab work. Each of the 8 lab sessions will be accompanied by an additional assessed exercise for you to work through in your own time. You will have to work through the exercises by yourself and write up your findings. You will submit your answers/findings/workings for all the assessed exercises to handin in week 10 (3rd May, 16:00) for marking. Each of the 8 exercises is worth 5% of your overall module mark. We recommend that you do each exercise as soon as possible after the corresponding lab session, rather than leaving them all to the end.

Labs will start in the second week (11th Feb), running 9-11 on Friday mornings. We intend to split the class into lab groups, each of which will be assigned a demonstrator who will be available to your group for the session in a Teams side channel. The demonstrators can offer advice on both the labs and the group coursework; however, you should not ask them about the assessed lab exercises that you complete after the lab.

The full lab schedule is below:

| Week | Date | Location | Topic | Exercise Link |
|------|------|----------|-------|---------------|
| 1 | 04-Feb | | NO LAB | |
| 2 | 11-Feb | Teams | Introducing PyTorch | Lab 1 Exercise |
| 3 | 18-Feb | Teams | Automatic Differentiation | Lab 2 Exercise |
| 4 | 25-Feb | Teams | Optimisation | Lab 3 Exercise |
| 5 | 04-Mar | Teams | Implementing simple Neural Networks using PyTorch and Torchbearer | Lab 4 Exercise |
| 6 | 11-Mar | Teams | Implementing and training Convolutional Neural Networks using PyTorch and Torchbearer | Lab 5 Exercise |
| 7 | 18-Mar | Teams | Using pretrained models and transfer learning | Lab 6 Exercise |
| 8 | 25-Mar | Teams | Recurrent Networks, Sequence Prediction and Embeddings | Lab 7 Exercise |
| 9 | 29-Apr | Teams | Autoencoders and Deep Generative Models | Lab 8 Exercise |
| 10 | 06-May | Teams | Coursework Help and Advice | |
| 11 | 13-May | Teams | Coursework Help and Advice | |
| 12 | 20-May | | NO LAB | |

Note: the worksheet links currently point to last year’s versions. Please don’t be surprised if we make some updates before each session!

Online Quizzes

There will be two assessed online quizzes; we are planning for these to be on 9th March and 6th May. Each will be available on Blackboard for a 24-hour period, and once started you must complete it within one hour.

Coursework assignment

Information on the coursework assignment (worth 40% of the module) is here.

Where to get additional help

Talk to us! You are more than welcome to arrange to meet to discuss issues related to the course during lab sessions or by appointment. The lab sessions are also facilitated by a team of our PhD students, who are experts in the deep learning / differentiable programming field in their own right (many of them have published work in this space, or are close to doing so). We can be reached via Kate’s email or Jon’s email.
