Neural Network Verification with Vehicle

Wednesday 23rd August, MORNING SESSION

Ekaterina Komendantskaya, University of Southampton, and Luca Arnaboldi, University of Birmingham.

Machine learning components, such as neural networks, are gradually making their way into software; and when that software is safety-critical, the machine learning components must be verifiably safe. This gives rise to the problem of neural network verification. The community has been making rapid progress on methods for incorporating logical specifications into neural networks, both in training and in verification. However, to truly unlock the ability to verify real-world neural-network-enhanced systems, we believe the following is necessary.

  1. The specification should be written once and should automatically work with training and verification tools.
  2. The specification should be written in a manner independent of any particular neural network training/inference platform.
  3. It should be possible to write the specification as a high-level property over the problem space, rather than as a property over the embedding space (see the sketch after this list).
  4. After verification, the specification should be exportable to general-purpose interactive theorem provers, so that its proof can be incorporated into proofs about the larger system around the neural network.
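To give a flavour of what such a specification looks like, below is a minimal sketch of an epsilon-ball robustness property in the Vehicle specification language, loosely following the MNIST example from the Vehicle tutorial. The concrete names (classifier, Image, epsilon, the dataset declarations) are illustrative placeholders, and the exact syntax may differ between Vehicle releases.

  -- A sketch of an epsilon-ball robustness specification, after the
  -- Vehicle tutorial's MNIST example; names here are illustrative.

  type Image = Tensor Rat [28, 28]
  type Label = Index 10

  -- The network is declared abstractly: the specification never mentions
  -- weights or a particular training/inference platform (point 2).
  @network
  classifier : Image -> Vector Rat 10

  -- The perturbation radius is supplied when the spec is compiled.
  @parameter
  epsilon : Rat

  -- "The classifier advises label i": a property phrased over the problem
  -- space of images and labels, not the embedding space (point 3).
  advises : Image -> Label -> Bool
  advises img i = forall j . j != i => classifier img ! i > classifier img ! j

  boundedByEpsilon : Image -> Bool
  boundedByEpsilon x = forall i j .
    -epsilon <= x ! i ! j and x ! i ! j <= epsilon

  robustAround : Image -> Label -> Bool
  robustAround image label = forall perturbation .
    boundedByEpsilon perturbation =>
      advises (image - perturbation) label

  -- The property is checked around each point of a dataset, which is
  -- linked in at compile time rather than hard-coded in the spec (point 1).
  @parameter(infer=True)
  n : Nat

  @dataset
  trainingImages : Vector Image n

  @dataset
  trainingLabels : Vector Label n

  @property
  robust : Vector Bool n
  robust = foreach i . robustAround (trainingImages ! i) (trainingLabels ! i)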

In this tutorial we will present Vehicle, a tool that allows users to do all of this. We will give an introduction to the Vehicle specification language, and then walk attendees through using it to express a variety of famous and not-so-famous specifications. We will then demonstrate how a specification can be compiled to (i) TensorFlow graphs, to be used as loss functions during training; (ii) queries for network verifiers; and (iii) Agda code, so that the verified property can be used in a mainstream interactive theorem prover. Finally, we will discuss some of the technical challenges in the implementation, as well as some of the outstanding problems.
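As a rough indication of the verification and export workflow, the sketch below shows the kind of command-line invocations involved. The subcommands and flag names are assumptions based on the Vehicle tutorial at the time of writing and may differ between releases; the TensorFlow loss-function backend is driven from Vehicle's Python bindings and is not shown here.

  # Compile the specification to queries for the Marabou verifier, check
  # them against a concrete network and dataset, and record the result.
  # (File names, and the exact flag spellings, are illustrative.)
  vehicle verify --specification mnist.vcl --verifier Marabou \
    --network classifier:mnist.onnx \
    --dataset trainingImages:images.idx \
    --dataset trainingLabels:labels.idx \
    --parameter epsilon:0.1 \
    --cache mnist-cache/

  # Export the verified property to Agda, where it can be used as a
  # checked lemma in proofs about the surrounding system.
  vehicle export --target Agda --cache mnist-cache/ --output MnistRobust.agda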

PREPARATION:

Attendees are encouraged to check the prerequisites for Vehicle here: https://vehicle-lang.github.io/tutorial/#prerequisites

SPEAKERS:

Ekaterina Komendantskaya is a Professor of Computer Science at the University of Southampton and the National Robotarium Fellow at Heriot-Watt University. She is an expert in methods linking AI and machine learning on the one hand with logic and programming languages on the other. She is a co-director of the Scottish Programming Languages Institute (SPLI) and leads the Lab for AI and Verification at Heriot-Watt. She has received more than £4.5M of funding from EPSRC, NCSC, and SICSA, and currently leads the £3M EPSRC project “AI Secure and Explainable by Construction (AISEC)”.

Luca Arnaboldi is an Assistant Professor of Cyber Security in the School of Computer Science at the University of Birmingham. His research focuses on the security of autonomous systems, explainable AI, and the formal verification of machine learning algorithms. He is particularly interested in the automated verification of security protocols, to which end he co-developed MetaCP, a tool that automatically translates a graphical protocol design into both a formal specification and a working correct-by-design implementation.