This year we’re expanding our practical sessions into two parallel tracks: one covering Fundamentals and the other covering more Specialized topics.

NOTE:

  • If you’re new to machine learning and deep learning, we suggest that you follow the Fundamental track.
  • If you have some experience training deep learning models already, you may be interested in the more advanced topics covered in our Specialized track.

Fundamental Track

1a. Machine Learning Fundamentals: We introduce the idea of classification (sorting things into categories) using a machine-learning model. We explore the relationship between a classifier’s parameters and its decision boundary (a line that separates predictions of categories) and also introduce the idea of a loss function. Finally, we briefly introduce TensorFlow. [Colab Notebook]
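To make this concrete, here is a minimal sketch, assuming TensorFlow 2, of a linear classifier trained with a cross-entropy loss; the toy data and all variable names are illustrative, and this is not the session’s notebook:

    import numpy as np
    import tensorflow as tf

    # Toy 2D data: two Gaussian blobs, labelled 0 and 1.
    rng = np.random.default_rng(0)
    x = tf.constant(np.concatenate([rng.normal(-1.0, 1.0, (100, 2)),
                                    rng.normal(+1.0, 1.0, (100, 2))]),
                    dtype=tf.float32)
    y = tf.constant(np.concatenate([np.zeros(100), np.ones(100)]),
                    dtype=tf.float32)

    # A linear classifier: the parameters w and b define the decision
    # boundary w . x + b = 0 that separates the two predicted categories.
    w = tf.Variable(tf.zeros([2]))
    b = tf.Variable(0.0)
    optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

    for step in range(200):
        with tf.GradientTape() as tape:
            logits = tf.reduce_sum(x * w, axis=1) + b
            # The loss function measures how badly the current
            # parameters fit the labelled data.
            loss = tf.reduce_mean(tf.nn.sigmoid_cross_entropy_with_logits(
                labels=y, logits=logits))
        grads = tape.gradient(loss, [w, b])
        optimizer.apply_gradients(zip(grads, [w, b]))
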
2a. Deep Feedforward Networks: We implement and train a feed-forward neural network (or “multi-layer perceptron”) on a dataset called “Fashion MNIST”, consisting of small greyscale images of items of clothing. We consider the practical issues around generalisation to out-of-sample data and introduce some important techniques for addressing them. [Colab Notebook]
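As a flavour of what this session builds, here is a minimal Keras sketch, assuming TensorFlow 2; dropout and a validation split stand in for the generalisation techniques covered, and the layer sizes are illustrative:

    import tensorflow as tf

    # Fashion MNIST: 28x28 greyscale images of clothing, 10 classes.
    (x_train, y_train), (x_test, y_test) = \
        tf.keras.datasets.fashion_mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # A small multi-layer perceptron; dropout is one common technique
    # for improving generalisation to out-of-sample data.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Held-out validation data lets us watch out-of-sample performance.
    model.fit(x_train, y_train, epochs=5, validation_split=0.1)
    model.evaluate(x_test, y_test)
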
3a. Convolutional Networks: We cover the basics of convolutional neural networks (“ConvNets”). ConvNets were invented in the late 1980s and have had tremendous success, especially in computer vision (dealing with image data), although they have also been used very successfully in speech-processing pipelines and, more recently, in machine translation. [Colab Notebook]
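For a taste of the architecture, here is a minimal ConvNet sketch in Keras (assuming TensorFlow 2); the layer sizes are our own illustrative choices:

    import tensorflow as tf

    # A small ConvNet for 28x28 greyscale images: convolution and pooling
    # layers learn local image features, then a dense layer classifies.
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu",
                               input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.summary()
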
4a. Recurrent Neural Networks: Recurrent neural networks (RNNs) were designed to handle sequential data (e.g. text or speech). In this practical we will take a closer look at RNNs and then build a model that can generate English sentences in the style of Shakespeare! [Colab Notebook]
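As an illustration of the kind of model involved, here is a minimal character-level sketch in Keras; the vocabulary size and layer widths are assumptions, not the notebook’s values:

    import tensorflow as tf

    vocab_size = 65  # assumed: number of distinct characters in the corpus

    # A character-level RNN: given a sequence of characters, it predicts a
    # distribution over the next character at every step. Repeatedly
    # sampling from those predictions generates new text one character
    # at a time.
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, 64),
        tf.keras.layers.LSTM(128, return_sequences=True),
        tf.keras.layers.Dense(vocab_size),
    ])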

Specialized Track

Important Note: Most of the Specialized sessions require pre-work (before the Indaba!) for you to get the most out of them. The pre-work for each session is to read through the practical and review the listed background knowledge, making sure you are comfortable with most of it before you attend.
1b. Build your own TensorFlow: This practical covers the basic idea behind Automatic Differentiation, a powerful software technique that allows us to quickly and easily compute gradients for all kinds of numerical programs. We will build a small Python framework that allows us to train our own simple neural networks, like TensorFlow does, but using only NumPy (a tiny sketch of the idea follows the requirements below). NOTE: This practical is particularly long, so coming adequately prepared is very important.

  • Pre-work: Please read through all the background sections in the practical.
  • Background knowledge requirements:
    • Linear algebra (multiplying matrices, row vectors, column vectors, summation notation)
    • Calculus (derivatives and partial derivatives, Jacobian matrix)
    • Deep Learning (have used a framework like TensorFlow or PyTorch before)

[Colab Notebook]
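To give a flavour of the idea, here is a minimal reverse-mode automatic-differentiation sketch in pure NumPy; the Tensor class and its two operations are our own illustrative construction, not the practical’s framework:

    import numpy as np

    class Tensor:
        """A value that records how it was computed, so that gradients
        can be propagated backwards through the computation graph."""
        def __init__(self, value, parents=()):
            self.value = np.asarray(value, dtype=float)
            self.grad = np.zeros_like(self.value)
            self.parents = parents  # (parent, local derivative) pairs

        def __add__(self, other):
            return Tensor(self.value + other.value,
                          parents=[(self, 1.0), (other, 1.0)])

        def __mul__(self, other):
            return Tensor(self.value * other.value,
                          parents=[(self, other.value), (other, self.value)])

        def backward(self, grad=1.0):
            # Chain rule: accumulate this node's gradient, then pass
            # grad times the local derivative back to each parent.
            self.grad = self.grad + grad
            for parent, local in self.parents:
                parent.backward(grad * local)

    # d(x*y + x)/dx = y + 1 = 4 at x = 2, y = 3.
    x, y = Tensor(2.0), Tensor(3.0)
    z = x * y + x
    z.backward()
    print(x.grad)  # 4.0
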
2b. Optimization for Deep Learning: We take a deep dive into optimisation, an essential part of deep learning and of machine learning in general. We’ll take a look at the tools that allow us to turn a random collection of weights into a state-of-the-art model for any number of applications. More specifically, we’ll implement a few standard optimisation algorithms for finding the minimum of Rosenbrock’s banana function and then try them out on Fashion MNIST (a small gradient-descent sketch follows below).
  • Background knowledge requirements: everyone with an undergraduate background in calculus should be able to enjoy this practical.
[Colab Notebook]
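As a small taste, here is a sketch of plain gradient descent on Rosenbrock’s banana function, with the gradient derived by hand; the learning rate, starting point, and iteration count are illustrative:

    import numpy as np

    def rosenbrock(x, y):
        # Rosenbrock's "banana" function; its global minimum is at (1, 1).
        return (1 - x) ** 2 + 100 * (y - x ** 2) ** 2

    def rosenbrock_grad(x, y):
        # Partial derivatives of the function above, computed by hand.
        dx = -2 * (1 - x) - 400 * x * (y - x ** 2)
        dy = 200 * (y - x ** 2)
        return np.array([dx, dy])

    # Plain gradient descent; the tiny learning rate reflects how badly
    # conditioned the curved, banana-shaped valley is.
    point = np.array([-1.0, 1.0])
    for step in range(20000):
        point -= 1e-3 * rosenbrock_grad(*point)
    print(point)  # slowly approaches the minimum at (1, 1)
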
3b. Deep Generative Models: We will investigate two kinds of deep generative models, namely Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). We will first train a GAN to generate images of clothing, and then apply VAEs to the same problem (a minimal GAN definition is sketched below).
  • Background knowledge requirements:
    • Supervised learning
    • ConvNets (deep learning for image data)
    • Probability theory (the normal distribution, conditional and marginal probabilities, expected values, Bayes’ rule)
[Colab Notebook]
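For illustration, here is a minimal Keras sketch of the two networks in a GAN for 28x28 greyscale images; the architecture and latent dimension are our own assumptions, and the adversarial training loop is omitted:

    import tensorflow as tf

    latent_dim = 100  # assumed size of the random noise vector

    # The generator maps random noise to a fake 28x28 image.
    generator = tf.keras.Sequential([
        tf.keras.layers.Dense(7 * 7 * 64, activation="relu",
                              input_shape=(latent_dim,)),
        tf.keras.layers.Reshape((7, 7, 64)),
        tf.keras.layers.Conv2DTranspose(32, 3, strides=2, padding="same",
                                        activation="relu"),
        tf.keras.layers.Conv2DTranspose(1, 3, strides=2, padding="same",
                                        activation="sigmoid"),
    ])

    # The discriminator scores an image: real (high) vs. generated (low).
    discriminator = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, strides=2, padding="same",
                               activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1),
    ])
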
4b. Reinforcement Learning: We explore the reinforcement learning problem using the OpenAI Gym environment. We will then build agents that act in the environment in three different ways: an agent that takes random actions, a neural net agent trained with random search, and lastly a neural net agent trained using a policy gradient algorithm (the random-actions agent is sketched below).
  • Background knowledge requirements:
    • MLPs (neural network basics)
    • Supervised learning
    • Probability theory (expectations)
[Colab Notebook]
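As a taste, here is a sketch of the first of those three agents, one that simply takes random actions, written against the classic (pre-0.26) OpenAI Gym API; the choice of the CartPole-v1 environment is illustrative:

    import gym

    env = gym.make("CartPole-v1")

    for episode in range(5):
        observation = env.reset()
        total_reward, done = 0.0, False
        while not done:
            action = env.action_space.sample()  # pick a random action
            observation, reward, done, info = env.step(action)
            total_reward += reward
        print("episode", episode, "reward:", total_reward)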

Getting Help

There will be several tutors around during each session to help you move through the practicals.
Helping each other is part of the Indaba spirit. You can do this using the #practicals channel on Slack.