Machine Learning Tutorials
In the Machine Learning Tutorial Series, external guest speakers give tutorial lectures on focused machine learning topics. The target audience is undergraduates, MSc and PhD students, postdocs, and interested faculty members.
All talks will be announced via the mltalks mailing list.
If you are looking for previous tutorials, check out the ML Tutorials Archive.
Spring 2019
Normally, the talks will be on Wednesdays, 14:00 – 16:00.
Schedule
Date | Speaker | Title
2019-02-27 | Arthur Gretton (UCL) | Kernel methods for hypothesis testing and sample generation
2019-03-13 | Pawan Kumar (University of Oxford) | Neural Network Verification
Abstracts
Kernel methods for hypothesis testing and sample generation (Arthur Gretton, 2019-02-27)
In this tutorial, I will provide an introduction to distribution embeddings using kernels, with applications including hypothesis testing and sample generation through a Generative Adversarial Network (GAN). I’ll begin with two constructions of divergences on probability distributions: as a difference in feature means, and using a class of well-behaved witness functions that detects where the distributions are most different. I’ll introduce the Maximum Mean Discrepancy (MMD), which can be viewed in terms of both interpretations. I’ll discuss how to choose features to increase the statistical power of two-sample tests based on the MMD; I’ll then describe how the MMD features can be weakened to make them more suitable for training GANs. Time permitting, I’ll briefly cover additional applications, such as dependence detection and testing goodness-of-fit for statistical models.
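To make the abstract's "difference in feature means" interpretation concrete, here is a minimal NumPy sketch of the (biased) squared-MMD estimator with a Gaussian kernel. This is not code from the tutorial; the function names and the fixed bandwidth `sigma=1.0` are illustrative choices.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    """Gaussian (RBF) kernel matrix between the rows of X and Y."""
    # Pairwise squared Euclidean distances via the expansion ||x-y||^2 = ||x||^2 + ||y||^2 - 2<x,y>
    sq = np.sum(X**2, axis=1)[:, None] + np.sum(Y**2, axis=1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-sq / (2.0 * sigma**2))

def mmd2_biased(X, Y, sigma=1.0):
    """Biased estimator of squared MMD: the squared distance between
    empirical kernel mean embeddings of the two samples."""
    Kxx = gaussian_kernel(X, X, sigma)
    Kyy = gaussian_kernel(Y, Y, sigma)
    Kxy = gaussian_kernel(X, Y, sigma)
    return Kxx.mean() + Kyy.mean() - 2.0 * Kxy.mean()
```

In a two-sample test, this statistic is compared against a null distribution (e.g. obtained by permuting the pooled samples); samples from the same distribution give values near zero, while well-separated distributions give large values.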
Neural Network Verification (Pawan Kumar, 2019-03-13)
In recent years, deep neural networks have been successfully employed to improve the performance of several tasks in computer vision, natural language processing and other related areas of machine learning. This has resulted in the launch of several ambitious projects where a human will be replaced by neural networks. Such projects include safety critical applications such as autonomous navigation and personalised medicine. Given the high risk of a wrong decision in such applications, a key step in the deployment of neural networks is their formal verification: proving that a neural network satisfies a desirable property, or generating a counterexample to show that it does not. This tutorial will summarise the progress made in neural network verification thus far.
The tutorial is divided into three parts.
Part 1: Unsound methods, which can be used to show that some of the false properties are indeed false.
Part 2: Incomplete methods, which can be used to show that some of the true properties are indeed true.
Part 3: Complete methods, which combine unsound and incomplete methods to provide formal verification.
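As a small illustration of an incomplete method (Part 2), the sketch below propagates elementwise input bounds through a ReLU network via interval arithmetic: if the computed output bounds satisfy a property, the property is proved, but loose bounds may fail to certify a true property. This is a hypothetical minimal example, not code from the tutorial, and interval bound propagation is only one of several incomplete methods.

```python
import numpy as np

def interval_bound_propagation(lower, upper, weights, biases):
    """Propagate an input box [lower, upper] through affine layers with
    ReLU activations on all hidden layers, returning output bounds."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        # Represent the box by its center and radius; an affine map sends
        # the center through W, b and scales the radius by |W|.
        center = (upper + lower) / 2.0
        radius = (upper - lower) / 2.0
        new_center = W @ center + b
        new_radius = np.abs(W) @ radius
        lower = new_center - new_radius
        upper = new_center + new_radius
        if i < len(weights) - 1:
            # ReLU is monotone, so it can be applied to the bounds directly.
            lower = np.maximum(lower, 0.0)
            upper = np.maximum(upper, 0.0)
    return lower, upper
```

Every true network output for an input in the box lies within the returned bounds, so the method is sound; it is incomplete because the bounds can be loose enough that a true property remains unproved.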