Machine Learning Tutorials

In the Machine Learning Tutorial Series, external guest speakers will give tutorial lectures on focused machine learning topics. The target audience is MSc and PhD students as well as interested faculty members.
All talks will be announced via the ml-talks mailing list.

If you are looking for previous tutorials, check out the ML Tutorials Archive.

Spring 2017

Normally, the talks will be on Wednesdays, 14:00 – 16:00.

Schedule

February 22, 2017 | Jon Cockayne | University of Warwick | Probabilistic Numerical Methods | Room: 145 (Huxley) | video
March 1, 2017 | Amy Nicholson | Microsoft | Can Machine Learning predict whether you’ll come to my talk? | Room: 145 (Huxley)
March 15, 2017 | Roderick Murray-Smith | University of Glasgow | Machine Learning and Human Computer Interaction: Sensing, inference & control | Room: 145 (Huxley)

Abstracts

Jon Cockayne (University of Warwick, 2017-02-22): Probabilistic Numerical Methods

The field of probabilistic numerics has experienced a surge in research in recent years. In this talk we introduce the field and present recent developments. First, the conjugate problems of integration and linear partial differential equations will be presented in detail. In these problems, a Gaussian prior combined with a linear system of equations yields a closed-form posterior. This will be followed by the presentation of a recent contribution which outlines the rigorous statistical principles underlying the field. Definitions and well-posedness results for “Bayesian” probabilistic numerical methods will be presented, as well as algorithms for sampling from the intractable posterior distribution in such settings.
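The closed-form conditioning step described above is straightforward to sketch. The toy Python example below (the grid discretisation, squared-exponential kernel, integrand and evaluation nodes are my own illustrative assumptions, not material from the talk) conditions a Gaussian prior on a few linear observations and then reads off a Gaussian posterior over the integral:

```python
# A minimal sketch of a "Bayesian" probabilistic numerical method for integration:
# place a Gaussian prior on the integrand, condition on a few evaluations
# (a linear system of equations), and obtain a closed-form Gaussian posterior,
# both over the integrand and over the integral itself.
import numpy as np

# Fine grid on [0, 1]; the unknown integrand is represented by its grid values.
x = np.linspace(0.0, 1.0, 200)

# Squared-exponential prior covariance over grid values (zero prior mean).
lengthscale = 0.15
K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / lengthscale**2)

# "Data": evaluate a toy integrand at a handful of nodes (a linear observation A f).
f = lambda t: np.sin(3 * np.pi * t) + t          # illustrative integrand
nodes = np.array([10, 60, 110, 160, 190])        # indices of evaluation points
A = np.zeros((len(nodes), len(x)))
A[np.arange(len(nodes)), nodes] = 1.0
y = f(x[nodes])

# Closed-form Gaussian conditioning: posterior over all grid values.
jitter = 1e-8 * np.eye(len(nodes))
G = A @ K @ A.T + jitter
mean_post = K @ A.T @ np.linalg.solve(G, y)
cov_post = K - K @ A.T @ np.linalg.solve(G, A @ K)

# The integral is a linear functional (trapezoidal weights w), so its posterior
# is also Gaussian, with mean and variance available in closed form.
w = np.full(len(x), x[1] - x[0])
w[0] *= 0.5
w[-1] *= 0.5
integral_mean = w @ mean_post
integral_var = w @ cov_post @ w
print(f"posterior integral: {integral_mean:.4f} +/- {np.sqrt(integral_var):.4f}")
print(f"fine-grid quadrature of true integrand: {w @ f(x):.4f}")
```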

Amy Nicholson (Microsoft, 2017-03-01): Can Machine Learning predict whether you’ll come to my talk?

Artificial Intelligence, Machine Learning, Deep Learning, Data Science, Expert Systems: there are many terms that might not be completely clear if you don’t have a PhD, and possibly even if you do. During this session we will discuss where Machine Learning contributes to the overall AI story, as well as run through a unique Machine Learning challenge: can I use historical data from Microsoft’s previous Future Decoded events to predict who might attend which session at the conference?

This journey and challenge will take us through the Azure Machine Learning service, showing how you can go from raw data to a deployed web service using a data science process, utilising tools along the way such as R/Python scripts and Jupyter Notebooks.
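As a rough illustration of that kind of pipeline, here is a minimal attendance-prediction sketch written with scikit-learn on synthetic data; it does not use the Azure Machine Learning service or Microsoft’s Future Decoded data, and the feature names are hypothetical. The final step described in the talk, publishing the trained model as a web service, is omitted here.

```python
# A minimal raw-data-to-model sketch: tabular features about registrants,
# a train/test split, and a logistic-regression classifier predicting attendance.
# All data and feature names below are synthetic and purely illustrative.
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
# Hypothetical features for past event registrants.
df = pd.DataFrame({
    "sessions_bookmarked": rng.integers(0, 10, n),
    "days_registered_early": rng.integers(0, 60, n),
    "attended_last_year": rng.integers(0, 2, n),
})
# Synthetic label: did the person actually turn up to the session?
logits = (0.4 * df["sessions_bookmarked"]
          + 0.02 * df["days_registered_early"]
          + 1.0 * df["attended_last_year"] - 2.0)
df["attended"] = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="attended"), df["attended"], test_size=0.25, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
```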

Physicist Niels Bohr said that “making predictions is very difficult, especially about the future”; let’s see if Azure can help us out.

Roderick Murray-Smith (University of Glasgow, 2017-03-15): Machine Learning and Human Computer Interaction: Sensing, inference & control

The opportunities for interaction with computer systems are rapidly expanding beyond traditional input and output paradigms: full-body motion sensors, brain-computer interfaces, 3D displays and touch panels are now commonplace commercial items. The profusion of new sensing devices for human input, and the new display channels becoming available, offer the potential to create more involving, expressive and efficient interactions in a much wider range of contexts. Dealing with these complex sources of human intention requires appropriate mathematical methods: modelling and analysing interactions calls for sophisticated techniques that can transform streams of data from complex sensors into estimates of human intention.

This tutorial will focus on the use of inference and dynamical modelling in human-computer interaction. The combination of modern statistical inference and real-time closed loop modelling offers rich possibilities in building interactive systems, but there is a significant gap between the techniques commonly used in HCI and the mathematical tools available in other fields of computing science. This tutorial aims to illustrate how to bring these mathematical tools to bear on interaction problems, and will cover basic theory and example applications from:

– mobile interaction

– interaction with large music collections. This will include work on the Bang & Olufsen Beomoment product and Syntonetic’s Moodgalaxy, which combines Gaussian process priors, nonlinear dimensionality reduction and inferred moods to give you new ways to explore your music collection. I will also summarise some of our recent work on using the entropy of inferred mood and genre features to understand users’ criteria for playlist curation.

– 3D human motion and 3D capacitive sensing systems. Future interactions will often be casual interactions: flowing ‘around device’ or ‘over device’ interactions, potentially combined with speech recognition technologies, and I will look at the role of control theory and information theory in the analysis of such systems. I will give examples where we have developed 3D capacitive touch systems using particle filters and deep convolutional networks to infer finger pose and position above the surface of the device, and then created a series of ‘flow-based interactions’ which allow more carefree around-device gesturing.

– (Joint work with Jörg Müller & Antti Oulasvirta) I will present an empirical comparison of four models from manual control theory on their ability to model targeting behaviour by human users using a mouse: McRuer’s Crossover, Costello’s Surge, second-order lag (2OL), and the Bang-bang model (a minimal simulation sketch of the 2OL model appears after this list). Such dynamic models are generative, estimating not only movement time, but also pointer position, velocity, and acceleration on a moment-to-moment basis. We describe an experimental framework for acquiring pointing actions and automatically fitting the parameters of mathematical models to the empirical data. We present the use of time-series, phase-space and Hooke plot visualisations of the experimental data to gain insight into human pointing dynamics. We find that the identified control models can generate a range of dynamic behaviours that capture aspects of human pointing behaviour to varying degrees. Conditions with a low index of difficulty (ID) showed poorer fit because their unconstrained nature naturally leads to more dynamic variability. We report on characteristics of human surge behaviour in pointing. We describe trade-offs among the models. We conclude that control theory offers a promising complement to Fitts’ law based approaches in HCI, with models providing representations and predictions of human pointing dynamics which can improve our understanding of pointing and inform design.

– I will describe some work done together with the Optics group in Physics (QuantIC Quantum Imaging Hub), comparing Deep Convolutional Autoencoders with classical inverse-problem approaches and linear transformations of Gaussian process priors for solving inverse problems in Single Pixel Cameras. Single pixel cameras combine a Digital Mirror Array with a single exotic sensor. This allows rapid prototyping of sensors with specific properties and frequency ranges, going beyond what conventional silicon sensors can offer.
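To make the single-pixel-camera inverse problem concrete, here is a minimal numpy sketch: each measurement is the inner product of the scene with one mirror pattern, and reconstruction uses the posterior mean under a simple Gaussian prior (equivalently ridge-regularised least squares). The image size, random ±1 patterns, noise level and prior are illustrative assumptions rather than details of the QuantIC work, which also explores deep convolutional autoencoders.

```python
# Single-pixel-camera toy inverse problem: y = Phi x + noise, where each row of
# Phi is one mirror pattern and y holds the single-sensor readings.
import numpy as np

rng = np.random.default_rng(1)
side = 16
n_pixels = side * side
n_meas = 120                                   # fewer measurements than pixels

# Toy "scene": a bright square on a dark background.
scene = np.zeros((side, side))
scene[5:11, 5:11] = 1.0
x_true = scene.ravel()

# Random +/-1 mirror patterns; each row is one pattern shown on the mirror array.
Phi = rng.choice([-1.0, 1.0], size=(n_meas, n_pixels))
y = Phi @ x_true + 0.05 * rng.standard_normal(n_meas)

# Reconstruction: posterior mean for x ~ N(0, tau^2 I), y ~ N(Phi x, sigma^2 I),
# i.e. ridge regression with lam = sigma^2 / tau^2.
lam = 0.1
x_hat = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_pixels), Phi.T @ y)

err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {err:.2f}")
```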
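And, as promised above, a minimal simulation sketch of the second-order lag (2OL) pointing model: the pointer behaves like a damped spring pulled towards the target, so the model generates position, velocity and acceleration at every time step. The parameter values and settling criterion are illustrative assumptions, not fitted values from the study.

```python
# Second-order lag (2OL) pointing model: x'' = k*(target - x) - d*x',
# integrated with a simple Euler scheme from rest.
import numpy as np

def simulate_2ol(target, x0=0.0, k=30.0, d=9.0, dt=0.002, t_max=2.0):
    """Simulate a 2OL pointer movement; returns position, velocity, acceleration."""
    steps = int(t_max / dt)
    x, v = x0, 0.0
    xs, vs, accs = [], [], []
    for _ in range(steps):
        a = k * (target - x) - d * v      # spring pull towards target, minus damping
        v += a * dt
        x += v * dt
        xs.append(x)
        vs.append(v)
        accs.append(a)
    return np.array(xs), np.array(vs), np.array(accs)

dt = 0.002
pos, vel, acc = simulate_2ol(target=1.0, dt=dt)

# Crude movement-time estimate: first time the pointer enters a 2% band around the target.
within = np.abs(pos - 1.0) < 0.02
settle = np.argmax(within) * dt if within.any() else float("nan")
print(f"peak velocity {vel.max():.2f}, time to reach 2% band ~{settle:.2f}s")
```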