Machine Learning Tutorials

In the Machine Learning Tutorial Series, external guest speakers give tutorial lectures on focused machine learning topics. The target audience is undergraduates, MSc and PhD students, post-docs, and interested faculty members.
All talks will be announced via the ml-talks mailing list.

If you are looking for previous tutorials, check out the ML Tutorials Archive.

Spring 2019

Talks normally take place on Wednesdays, 14:00 – 16:00.



Date        Speaker                             Title
2019-02-27  Arthur Gretton (UCL)                Kernel methods for hypothesis testing and sample generation
2019-03-13  Pawan Kumar (University of Oxford)  Neural Network Verification



Kernel methods for hypothesis testing and sample generation (Arthur Gretton, 2019-02-27)
In this tutorial, I will provide an introduction to distribution embeddings using kernels, with applications including hypothesis testing and sample generation through a Generative Adversarial Network (GAN). I’ll begin with two constructions of divergences on probability distributions: as a difference in feature means, and via a class of well-behaved witness functions that detect where the distributions are most different. I’ll introduce the Maximum Mean Discrepancy (MMD), which can be viewed in terms of both interpretations. I’ll discuss how to choose features to increase the statistical power of two-sample tests based on the MMD; I’ll then describe how the MMD features can be weakened to make them more suitable for training GANs. Time permitting, I’ll briefly cover additional applications, such as dependence detection and testing goodness-of-fit for statistical models.
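To make the feature-mean view concrete, here is a minimal NumPy sketch of the (biased) squared-MMD estimator with a Gaussian kernel. The kernel choice, bandwidth, and toy data are illustrative assumptions, not taken from the talk.

```python
import numpy as np

def gaussian_kernel(x, y, bandwidth=1.0):
    """Gaussian (RBF) kernel matrix between the rows of x and the rows of y."""
    sq_dists = np.sum(x**2, 1)[:, None] + np.sum(y**2, 1)[None, :] - 2 * x @ y.T
    return np.exp(-sq_dists / (2 * bandwidth**2))

def mmd2_biased(x, y, bandwidth=1.0):
    """Biased estimate of the squared MMD: mean(K_xx) + mean(K_yy) - 2 mean(K_xy)."""
    k_xx = gaussian_kernel(x, x, bandwidth)
    k_yy = gaussian_kernel(y, y, bandwidth)
    k_xy = gaussian_kernel(x, y, bandwidth)
    return k_xx.mean() + k_yy.mean() - 2 * k_xy.mean()

rng = np.random.default_rng(0)
# Samples from the same distribution: MMD^2 should be near zero.
same = mmd2_biased(rng.normal(0, 1, (200, 2)), rng.normal(0, 1, (200, 2)))
# Samples from shifted distributions: MMD^2 should be clearly larger.
diff = mmd2_biased(rng.normal(0, 1, (200, 2)), rng.normal(2, 1, (200, 2)))
print(same, diff)
```

In a two-sample test, this statistic is compared against a null distribution (e.g. obtained by permutation) to decide whether the two samples came from the same distribution.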
Neural Network Verification (Pawan Kumar, 2019-03-13)
In recent years, deep neural networks have been successfully employed to improve the performance of several tasks in computer vision, natural language processing and other related areas of machine learning. This has resulted in the launch of several ambitious projects in which neural networks will replace humans. Such projects include safety-critical applications such as autonomous navigation and personalised medicine. Given the high risk of a wrong decision in such applications, a key step in the deployment of neural networks is their formal verification: proving that a neural network satisfies a desirable property, or generating a counter-example to show that it does not. This tutorial will summarise the progress made in neural network verification thus far.

The contents of the tutorial are divided into three parts.
Part 1: Unsound methods, which can be used to show that some of the false properties are indeed false.
Part 2: Incomplete methods, which can be used to show that some of the true properties are indeed true.
Part 3: Complete methods, which combine unsound and incomplete methods to provide formal verification.
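As an illustration of an incomplete method, the sketch below propagates interval bounds through a tiny ReLU network (interval bound propagation). The network weights, input box, and the property "output > 0" are made up for this example. When the certified lower bound is positive, the property is proved; when it is not, the method is simply inconclusive, rather than producing a counter-example.

```python
import numpy as np

def interval_linear(lo, hi, W, b):
    """Propagate an axis-aligned box through the affine map x -> W x + b."""
    center = (lo + hi) / 2
    radius = (hi - lo) / 2
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # worst-case spread per output coordinate
    return new_center - new_radius, new_center + new_radius

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps bounds to bounds elementwise."""
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Toy 2-2-1 ReLU network (weights chosen purely for illustration).
W1 = np.array([[1.0, -1.0], [0.5, 1.0]]); b1 = np.array([0.0, 0.1])
W2 = np.array([[1.0, 1.0]]);              b2 = np.array([0.2])

def output_bounds(lo, hi):
    lo, hi = interval_linear(lo, hi, W1, b1)
    lo, hi = interval_relu(lo, hi)
    return interval_linear(lo, hi, W2, b2)

# All inputs in the box [0.4, 0.6] x [0.4, 0.6].
lo, hi = output_bounds(np.array([0.4, 0.4]), np.array([0.6, 0.6]))
print(lo, hi)  # if lo > 0, the property "output > 0" is certified on the box
```

Complete methods tighten such bounds (e.g. by branching over the input box or the ReLU activation states) until the property is either proved or refuted with a concrete counter-example.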